A Step-by-Step Guide to Building a Monitoring System with Flask and SQLAlchemy

Shikha Pandey
6 min read · Feb 26, 2023


Introduction:

In this article, we take restaurant monitoring as an example; the same approach works for other systems as well.

Our system monitors several restaurants across the country and needs to know whether each store is online. For unknown reasons, a store might go offline for a few hours, and restaurant owners want a report of how often this happened in the past. In this tutorial, we will build a restaurant monitoring system with Flask and SQLAlchemy that tracks each store's uptime and downtime within its business hours.

Project Structure:

├── app
│ ├── __init__.py
│ ├── database.py
│ ├── models.py
│ ├── reports.py
│ ├── routes.py
│ ├── services.py
│ └── time.py
├── data
│ ├── business_hours.csv
│ ├── stores.csv
│ └── timezones.csv
├── .gitignore
├── LICENSE
├── README.md
├── config.py
├── REQUIREMENTS.txt
└── run.py

In this project, the app directory contains all the Flask application files. The database.py file holds the SQLAlchemy database configuration, models.py defines the database schema, and routes.py contains the Flask API endpoints for triggering and fetching the report. The services.py file reads the CSV files and loads their data into the database, reports.py generates the report, and time.py contains the helper functions for the uptime and downtime calculations.

We also have a data directory that contains the CSV files required for the project. The config.py file contains the database URI, and the run.py file starts the Flask application.

API Requirements:
We need two APIs for this project:

1. /trigger_report endpoint that will trigger report generation from the data provided (stored in DB)

· No input

· Output: report_id (random string)

· report_id will be used for polling the status of report completion

2. /get_report endpoint that will return the status of the report or the CSV

· Input: report_id

· Output:

a. If report generation is not complete, return “Running” as the output

b. If report generation is complete, return “Complete” along with the CSV file with the schema described below.

Data Output Requirement:
We want to output a report to the user that has the following schema:

store_id, uptime_last_hour(in minutes), uptime_last_day(in hours), uptime_last_week(in hours), downtime_last_hour(in minutes), downtime_last_day(in hours), downtime_last_week(in hours)

· Uptime and downtime should only include observations within business hours.

· We need to extrapolate uptime and downtime based on the periodic polls we have ingested, to the entire time interval.

Building the Flask Application:
We will start by creating a new Flask application in the app/__init__.py file:
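A minimal version might look like the sketch below. The create_app factory pattern and keeping the shared SQLAlchemy instance in app/database.py are one reasonable layout, not the only one:

# app/database.py
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

# app/__init__.py
from flask import Flask

from config import Config
from app.database import db  # the shared SQLAlchemy() instance


def create_app():
    app = Flask(__name__)
    app.config.from_object(Config)
    db.init_app(app)

    # Register the API endpoints defined in app/routes.py
    from app.routes import api
    app.register_blueprint(api)

    # Create the tables declared in app/models.py
    with app.app_context():
        from app import models  # importing registers the model classes
        db.create_all()

    return app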

We also need to define the Config class in config.py:
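Here is a minimal sketch; the SQLite URI is just a placeholder, and any SQLAlchemy-compatible connection string works:

# config.py
class Config:
    # Placeholder URI: swap in PostgreSQL, MySQL, etc. as needed
    SQLALCHEMY_DATABASE_URI = "sqlite:///monitoring.db"
    SQLALCHEMY_TRACK_MODIFICATIONS = False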

The SQLALCHEMY_DATABASE_URI variable specifies the database connection string, and SQLALCHEMY_TRACK_MODIFICATIONS is set to False to disable the modification tracking feature.

Next, let’s create the database schema in app/models.py:
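The sketch below shows one plausible layout for these tables; the exact column names and types are assumptions based on the CSV files and the report schema described earlier:

# app/models.py
from datetime import datetime

from app.database import db


class Store(db.Model):
    id = db.Column(db.BigInteger, primary_key=True)   # store_id
    name = db.Column(db.String(128), nullable=True)
    timezone = db.Column(db.String(64), default="America/Chicago")


class BusinessHours(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    store_id = db.Column(db.BigInteger, db.ForeignKey("store.id"), index=True)
    day_of_week = db.Column(db.Integer)               # 0 = Monday ... 6 = Sunday
    start_time_local = db.Column(db.Time)
    end_time_local = db.Column(db.Time)


class Status(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    store_id = db.Column(db.BigInteger, db.ForeignKey("store.id"), index=True)
    timestamp_utc = db.Column(db.DateTime, index=True)
    status = db.Column(db.String(10))                 # "active" or "inactive"


class Report(db.Model):
    id = db.Column(db.String(36), primary_key=True)   # the random report_id
    status = db.Column(db.String(10), default="Running")
    created_at = db.Column(db.DateTime, default=datetime.utcnow)
    csv_path = db.Column(db.String(256), nullable=True)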

The above code defines four tables in the database: Store, BusinessHours, Status, and Report. The Store table holds information about each store, including its name and timezone. The BusinessHours table stores each store's business hours: the weekday, start time, and end time. The Status table records the polled status of each store: a timestamp and whether the store was online at that moment. The Report table tracks each generated report and its completion status.

We have used pandas library to read the data and store into the database. For example:
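A sketch of such a loader, assuming the status polls live in stores.csv with store_id, timestamp_utc, and status columns (the path, column names, and 10,000-row chunk size are assumptions):

# app/services.py
import pandas as pd

from app.database import db
from app.models import Status

CHUNK_SIZE = 10_000


def load_store_status(path="data/stores.csv"):
    # Read the large CSV in fixed-size chunks so the whole file
    # is never held in memory at once
    for chunk in pd.read_csv(path, chunksize=CHUNK_SIZE,
                             parse_dates=["timestamp_utc"]):
        rows = [
            Status(store_id=row.store_id,
                   timestamp_utc=row.timestamp_utc,
                   status=row.status)
            for row in chunk.itertuples(index=False)
        ]
        db.session.bulk_save_objects(rows)  # one batched INSERT per chunk
        db.session.commit()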

We define a chunk size so that large files are split into smaller batches, which are easier to insert into the database.

Next, we need to define the two endpoints for our API in app/routes.py — /trigger_report and /get_report.

The /trigger_report endpoint triggers the report generation process and returns a report ID, which can be used to check the status of the report. Here’s the code for this endpoint:
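A sketch of this endpoint; generating a uuid4 for the report_id and handing the work to a background thread are implementation choices, and generate_report lives in app/reports.py (shown later):

# app/routes.py
import threading
import uuid

from flask import Blueprint, current_app, jsonify

from app.database import db
from app.models import Report
from app.reports import generate_report

api = Blueprint("api", __name__)


@api.route("/trigger_report", methods=["POST"])
def trigger_report():
    report_id = str(uuid.uuid4())
    db.session.add(Report(id=report_id, status="Running"))
    db.session.commit()

    app = current_app._get_current_object()

    def run():
        # The worker thread needs its own application context
        with app.app_context():
            generate_report(report_id)

    threading.Thread(target=run, daemon=True).start()
    return jsonify({"report_id": report_id})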

The /get_report endpoint returns the status of the report or the CSV file. If the report generation is not complete, it returns “Running” as the output. If the report generation is complete, it returns “Complete” along with the CSV file with the schema described above. Here’s the code for this endpoint:
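A sketch of this endpoint, continuing the same routes.py file:

# app/routes.py (continued)
from flask import request, send_file


@api.route("/get_report", methods=["GET"])
def get_report():
    report_id = request.args.get("report_id")
    report = Report.query.get(report_id)
    if report is None:
        return jsonify({"error": "unknown report_id"}), 404
    if report.status != "Complete":
        return jsonify({"status": "Running"})
    # Generation finished: hand back the CSV file itself
    return send_file(report.csv_path, as_attachment=True,
                     download_name=f"report_{report_id}.csv")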

That’s it for the code implementation of the Restaurant Monitoring System. We have defined the database schema, created two API endpoints, and written a service to import data from CSV files into the database.

In the next section, we will discuss how to use this system to generate reports and monitor restaurant uptime.

Report Generation

Our system needs a way to generate uptime reports for all the restaurants it monitors. For this purpose, we will create two APIs using Flask.

The first API endpoint is /trigger_report, which will trigger the report generation. It does not require any input and will return a report_id, which can be used to poll the status of the report.

The second API endpoint is /get_report, which will return the status of the report or the CSV. It takes the report_id as input and returns one of the following:

  • If report generation is not complete, it will return “Running” as the output.
  • If report generation is complete, it will return “Complete” along with the CSV file with the schema described above.

We will implement these APIs in routes.py and use the reports.py module to generate the report.
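A sketch of what reports.py might contain; compute_uptime_downtime is the per-store helper from app/time.py, shown in the next section, and the reports/ output directory is an assumption:

# app/reports.py
import csv
import os

from app.database import db
from app.models import Report, Store
from app.time import compute_uptime_downtime


def generate_report(report_id):
    os.makedirs("reports", exist_ok=True)
    path = f"reports/{report_id}.csv"
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            "store_id",
            "uptime_last_hour", "uptime_last_day", "uptime_last_week",
            "downtime_last_hour", "downtime_last_day", "downtime_last_week",
        ])
        for store in Store.query.all():
            writer.writerow([store.id, *compute_uptime_downtime(store)])

    # Mark the report as done so /get_report can return the CSV
    report = Report.query.get(report_id)
    report.status = "Complete"
    report.csv_path = path
    db.session.commit()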

Monitoring Restaurant Uptime

Our system needs a way to monitor the uptime of all the restaurants. To do this, we will use the data stored in our database and generate a report that shows the uptime and downtime of each restaurant.

We will need to perform the following steps to generate the report:

  1. Extract the data for each restaurant from the database.
  2. For each restaurant, calculate the uptime and downtime based on the observations within business hours.
  3. Extrapolate the uptime and downtime based on the periodic polls we have ingested, to the entire time interval.
  4. Generate a report that shows the uptime and downtime of each restaurant.

We will implement these steps in the time.py module, as sketched below.
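Here is a simplified sketch of steps 2 and 3. It uses step-function extrapolation (each poll's status is assumed to hold until the next poll) and, for brevity, skips converting each store's local business hours to UTC, which a production version must handle using the timezone stored per store:

# app/time.py
from datetime import datetime, timedelta

from app.models import BusinessHours, Status


def is_open(store_id, ts):
    # True if timestamp ts falls inside the store's business hours
    rows = BusinessHours.query.filter_by(
        store_id=store_id, day_of_week=ts.weekday()).all()
    if not rows:
        return True  # no business hours on record: treat as open 24x7
    return any(r.start_time_local <= ts.time() <= r.end_time_local
               for r in rows)


def uptime_downtime(store_id, since, until):
    # Minutes of uptime and downtime between `since` and `until`
    polls = (Status.query
             .filter(Status.store_id == store_id,
                     Status.timestamp_utc >= since,
                     Status.timestamp_utc <= until)
             .order_by(Status.timestamp_utc)
             .all())
    up = down = 0.0
    for i, poll in enumerate(polls):
        # Each poll "covers" the gap until the next poll (or `until`)
        end = polls[i + 1].timestamp_utc if i + 1 < len(polls) else until
        minutes = (end - poll.timestamp_utc).total_seconds() / 60
        if not is_open(store_id, poll.timestamp_utc):
            continue  # only count observations within business hours
        if poll.status == "active":
            up += minutes
        else:
            down += minutes
    return up, down


def compute_uptime_downtime(store):
    # Returns the six report columns for one store
    now = datetime.utcnow()
    up_h, down_h = uptime_downtime(store.id, now - timedelta(hours=1), now)
    up_d, down_d = uptime_downtime(store.id, now - timedelta(days=1), now)
    up_w, down_w = uptime_downtime(store.id, now - timedelta(weeks=1), now)
    # last_hour in minutes; last_day and last_week in hours
    return (round(up_h), round(up_d / 60, 2), round(up_w / 60, 2),
            round(down_h), round(down_d / 60, 2), round(down_w / 60, 2))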

Performance improvements and Optimization

Initially, data loading took too long, so the process needed optimization.

We have implemented multithreading and caching to optimize the data loading process and improve the performance of the system.

In the run.py file, multithreading has been implemented to import data in a non-blocking manner. This means that while the data is being imported into the database, the application can continue to respond to user requests without being blocked by the import process. This can help improve the user experience by ensuring that the application remains responsive even when large amounts of data are being imported.
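Here is a sketch of that wiring; import_all_data is a hypothetical wrapper around the loaders in app/services.py:

# run.py
import threading

from app import create_app
from app.services import import_all_data

app = create_app()

if __name__ == "__main__":
    # Load the CSVs in a background thread so the server starts serving
    # requests immediately instead of blocking on the import
    def run_import():
        with app.app_context():
            import_all_data()

    threading.Thread(target=run_import, daemon=True).start()
    app.run(debug=True)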

Additionally, frequently used query results are cached with the functools.lru_cache decorator, which lets the application serve them without hitting the database on every request. The cache is refreshed every hour so the application still picks up newly imported data. You can see this in the example below:
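lru_cache has no built-in time-based expiry, so a common trick, sketched here, is to pass an extra "time bucket" argument that changes once an hour; when the bucket changes, the old entry no longer matches and a fresh database query runs:

import time
from functools import lru_cache

from app.models import BusinessHours


def _hour_bucket():
    return int(time.time() // 3600)  # increments once per hour


@lru_cache(maxsize=128)
def _business_hours_cached(store_id, bucket):
    return BusinessHours.query.filter_by(store_id=store_id).all()


def get_business_hours(store_id):
    # Callers never pass the bucket; it exists only to expire entries hourly
    return _business_hours_cached(store_id, _hour_bucket())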

By using multithreading and caching, this Restaurant Monitoring System can provide fast and responsive performance even when dealing with large amounts of data.

Conclusion

In this article, we discussed how to build a system to monitor the uptime of restaurants and generate reports. We used Flask to create two APIs that trigger the report generation and return the status of the report. We also discussed how to monitor the uptime of restaurants and generate a report that shows the uptime and downtime of each restaurant.

With this system in place, we can monitor the uptime of all the restaurants and generate reports showing how often a store went offline during business hours. This helps restaurant owners improve uptime and provide better service to their customers.

This is just one example of how data can be collected, processed, and analyzed to generate meaningful insights. There are many such systems in use today, and as the amount of data available continues to grow, it becomes increasingly important to have robust and scalable systems in place to handle this data. I hope this article has provided some insights into how such systems can be designed and implemented.

You can check out the whole project on my GitHub.

I hope you found this article useful. If you have any questions or feedback, please feel free to leave a comment below.
