David France - Software Developer

Python

Capstone Project - Improving Database Dashboard

For this project I took a full-stack dashboard created in a previous class and improved it in three categories – software design and development, data structures and algorithms, and databases.

I chose to do all three enhancements on the same artifact – a dashboard for the fictional company Grazioso Salvare that allows its team members to interact with a database from the Austin Animal Shelter, which holds information on the animals in their system. The dashboard was built with a model-view-controller design pattern, using MongoDB as the model, a custom Python class – animalshelter – as the controller, and a Jupyter Notebook file as the view.

I chose this artifact because it only had read functionality, so I had the opportunity to add create, update, and delete functions. There was also room to improve the view by adding custom filters, and to improve speed by restructuring the database and adding an algorithm that automatically sorts animals on create or update. Overall, it gave me the opportunity to incorporate security, input validation, data structures, and code design into a polished final product. A sketch of the enhanced controller appears below.
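To illustrate the CRUD additions, here is a minimal sketch of what the enhanced controller could look like, assuming pymongo and a local MongoDB instance. The class name, connection details, and collection names are placeholders for illustration, not the project's exact code:

```python
# Hedged sketch of a CRUD controller; connection string, database name,
# and collection name are placeholders, not the project's actual values.
from pymongo import MongoClient

class AnimalShelter:
    """CRUD controller for the animals collection."""

    def __init__(self, username, password):
        self.client = MongoClient(f"mongodb://{username}:{password}@localhost:27017")
        self.collection = self.client["AAC"]["animals"]

    def create(self, document):
        # Insert one animal record; True if the write was acknowledged
        return self.collection.insert_one(document).acknowledged

    def read(self, query):
        # Return all documents matching the query as a list
        return list(self.collection.find(query))

    def update(self, query, new_values):
        # Apply new field values to every matching document
        return self.collection.update_many(query, {"$set": new_values}).modified_count

    def delete(self, query):
        # Remove every matching document
        return self.collection.delete_many(query).deleted_count
```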

View complete project with details of each step here

These videos detail the project before and after enhancements:

Original code files:

Updated code files:



Machine Learning - Solving a Maze

Images: the initial maze and the maze at the end of the run

For this project I trained an intelligent agent, using deep Q-learning, to find a route out of a maze. The agent solves the maze much the way a human would: it first explores the new world, then uses that knowledge to find a successful route. At each square, the agent either picks a direction at random or asks its neural network for the best possible choice at that square. The decision to explore or exploit is governed by a programmer-defined parameter that sets the percentage of time the agent spends exploring. After each move, the agent is rewarded for its choice – the better the choice, the higher the reward. Eventually, the agent has explored every square and knows the best choice to make at each one to maximize its total reward, which yields a winning route.
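The explore-or-exploit step described above can be sketched as an epsilon-greedy choice. This is a minimal illustration assuming a Keras-style model with a predict method and four integer-coded directions; the names here are placeholders, not the project's exact code:

```python
import random
import numpy as np

ACTIONS = [0, 1, 2, 3]  # left, up, right, down (illustrative encoding)

def choose_action(model, state, epsilon):
    """Explore with probability epsilon, otherwise exploit the model."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)      # explore: pick a random direction
    q_values = model.predict(state)        # exploit: predicted reward per move
    return int(np.argmax(q_values[0]))     # best-known move for this square
```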

For the agent, a major part of the process is the exploration vs. exploitation decision. Exploration means the agent makes random moves to eventually visit every cell in the maze, storing the reward from each result. Exploitation means the agent uses its neural network model to determine the best available move for a given cell. Ideally, the agent spends much of the early game exploring the maze and much of the later stages exploiting it. To that end, I designed my agent with an 80% chance of exploring early in the process, decreasing the exploration chance by 5% each epoch. This corresponds to a starting epsilon of 0.8 and a learning_rate value of 0.95. Training finished in 154 epochs, compared with 225 epochs for the epsilon of 0.1 and no learning rate used in the Project Two milestone. Experimenting with other epsilon and learning_rate values did not yield an improvement, so I settled on 0.8 and 0.95 respectively.
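The decay schedule amounts to multiplying epsilon by 0.95 after each epoch, so the exploration chance shrinks by 5% per epoch from its 0.8 starting point. A minimal sketch of that schedule:

```python
epsilon = 0.8   # starting exploration chance
decay = 0.95    # per-epoch multiplier (the learning_rate above)

for epoch in range(5):
    print(f"epoch {epoch}: exploration chance = {epsilon:.3f}")
    epsilon *= decay   # 5% less exploration each epoch
```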

View complete code for this project: