My experience in Lambda School Labs

Phyx
Apr 29, 2021

Throughout my time at Lambda School I have had the opportunity to learn many things, both about myself and about the code I’ve worked with. In the Labs portion of this year-and-a-half-long bootcamp we were presented with a real-life project. Labs gave us stakeholder meetings and daily standups with our teammates, as well as career-readiness mini-projects to help us build professional assets like our resumes and LinkedIn profiles. The project we were given was named Cityspire, a one-stop solution for users looking to move to a new city. The code we received was inherited from a previous team who had already gone above and beyond in their work. The team comprised full-stack web developers, data scientists, and iOS developers. I was on the data science team, tasked with creating new API endpoints to support additional features.

The features presented to us were to supply the front end with jobs data, weather data, traffic data, housing data, and walkability scores. Among those, I volunteered for traffic data, thinking I could find a traffic-data API fit for the job. I was sadly mistaken: most sources of traffic data were either small datasets or behind a paywall of some kind. So instead of looking for data ready to use, I searched for different ways to procure it. I found that the TomTom Traffic Index supplied the per-city traffic data my endpoint needed, and I took advantage of it.

The data I needed to scrape was the 2019 data, because we agreed as a team that 2020 would not be an accurate representation of traffic because of the pandemic. The problem I ran into was that Beautiful Soup would only scrape the HTML currently on the page, not the HTML that changed dynamically when a tab was clicked. With the help of my team’s TPL (technical project lead), we discovered that we could access the data’s JSON directly in the browser and open it using urllib. Using a dataset left by my predecessors, I sorted a subset of the cities by total population and scraped the data into its own CSV file.
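The workaround above, skipping HTML parsing entirely and reading the JSON endpoint directly, can be sketched roughly like this. The URL parameter, field names, and sample payload are illustrative stand-ins I made up for demonstration; they are not the actual TomTom structure or the project’s schema:

```python
import csv
import io
import json
import urllib.request


def fetch_traffic_json(url: str) -> list:
    """Fetch and decode a traffic JSON endpoint directly with urllib,
    sidestepping the dynamically rendered HTML that Beautiful Soup
    could not see. (Network call; not exercised in the demo below.)"""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))


def records_to_csv(records: list) -> str:
    """Flatten per-city JSON records into CSV text, one row per city."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["city", "worldRank", "congestion2019"]
    )
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()


# Hypothetical payload standing in for what the real endpoint returned.
SAMPLE_JSON = """
[
  {"city": "Los Angeles", "worldRank": 5, "congestion2019": 42},
  {"city": "New York", "worldRank": 12, "congestion2019": 37}
]
"""

records = json.loads(SAMPLE_JSON)
csv_text = records_to_csv(records)
print(csv_text)
```

In the project itself the records were then filtered to the most populous cities before being written out to the CSV file.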

Both the back-end endpoints and the front end have been fully deployed. The back-end traffic endpoint features the world rank of the most congested cities for the entire year, AM-peak and PM-peak congestion averages, and the worst day of the year. The API also provides job data for searched terms, busability and bikeability scores, and weather data, including the current temperature, what it feels like, the humidity, and the minimum and maximum temperatures for the given day.
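For illustration, the kind of summary a traffic endpoint like this might assemble can be sketched in plain Python. The field names and the numbers below are assumptions for demonstration only, not the project’s actual schema or data:

```python
from statistics import mean

# Hypothetical daily congestion records for one city (percent congestion).
DAILY_CONGESTION = {
    "2019-01-01": {"am_peak": 38, "pm_peak": 51},
    "2019-01-02": {"am_peak": 41, "pm_peak": 47},
    "2019-01-03": {"am_peak": 55, "pm_peak": 62},
}


def traffic_summary(city: str, world_rank: int, daily: dict) -> dict:
    """Assemble the response body such an endpoint might return:
    world rank, AM/PM peak congestion averages, and the worst day."""
    am_avg = mean(d["am_peak"] for d in daily.values())
    pm_avg = mean(d["pm_peak"] for d in daily.values())
    # "Worst day" here means the day with the highest combined peaks.
    worst_day = max(
        daily, key=lambda day: daily[day]["am_peak"] + daily[day]["pm_peak"]
    )
    return {
        "city": city,
        "world_rank": world_rank,
        "am_peak_avg": round(am_avg, 1),
        "pm_peak_avg": round(pm_avg, 1),
        "worst_day": worst_day,
    }


summary = traffic_summary("Los Angeles", 5, DAILY_CONGESTION)
print(summary)
```

A web framework would serve this dictionary as the JSON response for a per-city route; the handler logic itself reduces to a small aggregation like the one shown.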

One feature that could be added in the future is a scrolling bar that generates recommendations based on user inputs, instead of having them auto-recommended from previous user selections. Housing data could also be added, but there is a challenge: just like the traffic data, almost all relevant housing data is stuck behind a paywall, or requires a realtor’s license for a specific state to gain access to it.


Phyx

An engineering student with a passion to sate curious minds.