On 08/03/12 08:33, Wayne Stallwood wrote:
There is an overhead associated with launching scripts at that frequency, and of course you have to take steps to make sure a problem doesn't end up with the script running into itself, or running when the system is under heavy load. Probably it was decided that giving cron 1-second granularity might just encourage people to do bad things :)
Agreed. Although that said, the cut-off point is arbitrary, and cron has been able to go down to 1min resolution for many years (decades?), so all the arguments that apply to 1s resolution must have applied to 1m resolution 20 years ago. The overhead involved in starting a script must be considerably smaller (in relation to the capabilities of the typical host system) than it used to be, and I would be surprised if 1s resolution isn't actually just as "sensible" (or otherwise) as 1m resolution was when cron first adopted it.
In my case, writing a daemon is definitely the right option. But for a quick and dirty test system a 1s cron would be adequate (and may actually prove to remain adequate in the real world). There are advantages to spawning a fresh instance each time: the impact of things like memory leaks is confined to that one-second execution rather than compounding over time. This is not an argument against doing things properly, but it is still one option for making a system more reliable.
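For what it's worth, the usual workaround for cron's one-minute floor is a standard crontab entry that fires once a minute, with the script it runs looping once per second inside that slot. A minimal sketch below - the crontab path, the function name and the use of GNU timeout(1) to enforce the one-second slot are my assumptions, not anything cron itself provides:

```shell
# Hypothetical crontab line (fires once per minute as normal):
#   * * * * * /usr/local/bin/every-second.sh

# every_second JOB RUNS: run JOB roughly once per second, RUNS times
# (60 for a full minute slot), capping each run at its 1s slot so a
# stuck job can't pile up behind the next one.
every_second() {
    job=$1; runs=$2; i=0
    while [ "$i" -lt "$runs" ]; do
        timeout 1 sh -c "$job" &   # kill the job if it exceeds 1s
        sleep 1                    # pace the loop at one run per second
        wait                       # reap the (already finished) job
        i=$((i + 1))
    done
}
```

Crude, but it keeps the scheduling in cron's hands rather than a homebrewed daemon's.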
Although my need here is at a "micro" level, at a macro level we have situations where a desktop web browser (Firefox) has to be restarted daily to avoid accumulated problems - if we don't, it just gets slower and slower and consumes more and more RAM. We do what we can to avoid the core problem, but the daily kill-and-restart improves reliability. Similarly, if my system can do what I need in 0.5s under a cron-like scheme, it doesn't really matter that 0.4s of that is overhead and it would complete in 0.1s as a daemon (those figures aren't real measurements, I haven't got that far yet!). The extra time makes no difference to the results, so the trade-off is justified by the increased resilience it buys.
As an aside, the problem with "while (1) { do (something); sleep(1); }" is that it won't actually run every second: the period is one second plus however long do(something) takes, so the schedule drifts. In my case, if do(something) takes more than a second for any reason, it's actually more important that I kill it and start again than to let it complete. Agreed that this should all be managed within a daemon; I just want that daemon to be a tried-and-tested scheduler like cron (if such a thing exists), not something homebrewed.
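By way of contrast, here is a sketch of a drift-corrected loop that also implements the kill-rather-than-complete policy: instead of a fixed sleep(1), it sleeps until the next whole-second boundary, so the period stays 1s regardless of how long the job took. It assumes GNU date's %N nanoseconds, GNU sleep's fractional argument and GNU timeout(1); the function names are mine:

```shell
# Pause until the next whole-second boundary, so the loop period is
# 1s no matter how long the job ran (unlike a plain "sleep 1").
sleep_to_next_second() {
    frac_ns=$(( 1000000000 - $(date +%s%N) % 1000000000 ))
    sleep "$(printf '0.%09d' "$frac_ns")"
}

# Run JOB once per second for RUNS iterations; any run that exceeds
# its one-second slot is killed rather than allowed to complete.
run_every_second() {
    job=$1; runs=$2; i=0
    while [ "$i" -lt "$runs" ]; do
        timeout 1 sh -c "$job"
        sleep_to_next_second
        i=$((i + 1))
    done
}
```

Which is exactly the sort of thing I'd rather not maintain myself - hence wanting a tried-and-tested scheduler to do it.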
Mark