I'm writing an application to run on small ARM units. It collects data from various sources at varying frequencies (up to once per second) and needs to store it. The number of discrete values will vary, but we're running tests on around 200 data points, dropping the data into MySQL. By "varying frequencies" I mean that some items will be read every second and others less frequently (say once every 30 minutes), so I need a data structure that will cope with that; my tests are currently on the worst case of storing everything every second (or trying to!).
One option is to have a table per item, comprising little more than a timestamp and a value. Our tests so far have taken 2-5s to store 200 samples this way. We're looking at other approaches (e.g. one table for all samples, storing an item id, timestamp and value). However, I can't help thinking that MySQL is overkill and an overhead we could do without.
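For reference, here's a minimal sketch of the one-table-for-all-samples layout, using SQLite from the Python standard library as a stand-in for MySQL (the table and column names are my own, and batching all 200 inserts into one transaction is an assumption about where the 2-5s is going):

```python
import sqlite3
import time

# Single table for all samples: item id + timestamp + value.
# SQLite here is only a stand-in so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sample (
        item_id INTEGER NOT NULL,
        ts      INTEGER NOT NULL,   -- epoch seconds
        value   REAL    NOT NULL,
        PRIMARY KEY (item_id, ts)
    )
""")

def store_batch(conn, readings):
    # One multi-row insert inside a single transaction per second,
    # rather than 200 individual auto-committed INSERTs.
    with conn:
        conn.executemany(
            "INSERT INTO sample (item_id, ts, value) VALUES (?, ?, ?)",
            readings,
        )

now = int(time.time())
store_batch(conn, [(i, now, float(i)) for i in range(200)])
```

In our tests so far each sample was stored individually; if each insert is its own transaction, the per-commit overhead alone could plausibly account for most of the 2-5s.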
So, does anyone have any suggestions? We could build our own data storage system. The applications I've seen before that do this would typically have a single binary file per day/week/month of data for each item, storing one timestamp (marking the start of the file) and the sample period, followed by 4 bytes per sample at the appropriate offset into the file from there. Reading or writing then just becomes a case of working out which file to look at and the offset into it. But I'd rather not go quite so low-level if I can avoid it.
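To make the binary-file scheme concrete, here's a rough sketch of the offset arithmetic (the header layout and 4-byte float samples are my assumptions; I'm using an in-memory buffer in place of a real file):

```python
import io
import struct

HEADER = struct.Struct("<II")  # file start timestamp (epoch secs), sample period (secs)
SAMPLE = struct.Struct("<f")   # one 4-byte sample

def sample_offset(start_ts, period, ts):
    """Byte offset of the sample taken at time ts, given the file's header."""
    return HEADER.size + ((ts - start_ts) // period) * SAMPLE.size

def write_sample(f, start_ts, period, ts, value):
    f.seek(sample_offset(start_ts, period, ts))
    f.write(SAMPLE.pack(value))

def read_sample(f, start_ts, period, ts):
    f.seek(sample_offset(start_ts, period, ts))
    return SAMPLE.unpack(f.read(SAMPLE.size))[0]

# Usage: a file starting at t=1000 with a 1-second sample period.
f = io.BytesIO()
f.write(HEADER.pack(1000, 1))
write_sample(f, 1000, 1, 1005, 3.5)
```

The appeal is that there's no index at all: the timestamp itself addresses the sample, and a missed reading just leaves a gap at its slot.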