Single table or split the table

I have an application that collects data from 10 servers every minute and stores it in a MySQL database. There are about 10K records from each server every minute (roughly 100K rows per minute, or about 4.3 billion rows per month), and the data needs to be kept for 1 month.

Which is better: create a single table for all servers with a field that identifies the data source, or create a separate table for each server?

Thanks

Having used both strategies, I'm not sure MySQL is really suited for frequent data logging at all. You end up with huge, weighty indexes that take minutes to load and hog all your RAM, for data that's already sorted by time anyway.

I ended up just storing the data in raw text flat files. No index, ergo no index issues. A few years of various kinds of sensor data, logged every 5 minutes, is still just 72 megs.

@before4 you didn't mention how you'll use the logs.

In my opinion, storing in MySQL is better than flat files; querying becomes really easy.
Make sure to partition your table, e.g. on date and server_name (one partition per day), and create local indexes based on the kind of queries you'll run.
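As a minimal sketch of that layout, assuming a hypothetical server_metrics table (the table and column names are mine, not the original poster's):

```sql
-- Hypothetical schema: one row per sample, one partition per day.
CREATE TABLE server_metrics (
  server_name  VARCHAR(32)  NOT NULL,   -- field identifying the data source
  logged_at    DATETIME     NOT NULL,
  payload      VARCHAR(255) NULL,
  KEY idx_server_time (server_name, logged_at)  -- indexes on MySQL partitioned tables are local
)
PARTITION BY RANGE (TO_DAYS(logged_at)) (
  PARTITION p20240101 VALUES LESS THAN (TO_DAYS('2024-01-02')),
  PARTITION p20240102 VALUES LESS THAN (TO_DAYS('2024-01-03')),
  -- ...one partition per day of the month...
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);
```

Queries that filter on logged_at are then pruned to the matching day's partition instead of scanning the whole month.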

What is really slick is an RDBMS with partitioned tables, so removing old data and adding new data are fast and simple. You can name and recycle the same partitions across the 31 days. If the primary key is time or insert order, you can mix servers in the same partition, but RDBMS folks are fussy about this, so they usually give each server its own partition and you get strict time order in each one for free. Yes, indexes to support slow queries are good, as long as they don't slow inserts too much. Definitely one table.
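To make the 31-day recycling concrete, here's a hedged sketch against the hypothetical server_metrics table above; dropping a partition is a metadata operation, far cheaper than a DELETE over millions of rows:

```sql
-- Retire the oldest day in one cheap operation.
ALTER TABLE server_metrics DROP PARTITION p20240101;

-- Carve the next day out of the catch-all pmax partition.
ALTER TABLE server_metrics
  REORGANIZE PARTITION pmax INTO (
    PARTITION p20240202 VALUES LESS THAN (TO_DAYS('2024-02-03')),
    PARTITION pmax      VALUES LESS THAN MAXVALUE
  );
```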

Ever consider a near-real-time flow using remote clients? It often makes the data more useful, and there aren't such large processing surges. Batch went out with punched cards.
