On Tue, 27 Jul 2004 Ted.Harding@nessie.mcc.ac.uk wrote:
> Hi Folks,
> As we all know, the "free memory" reported by e.g. 'free' is usually very
> small. E.g. on this machine (endowed with "a massive 128MB RAM" -- OK,
> it's an oldish laptop and I'm cynically quoting typical ad material) I get
>   free
>                total       used       free     shared    buffers     cached
>   Mem:        126024     117952       8072          0       1476      50184
>   -/+ buffers/cache:      66292      59732
>   Swap:       400640     124092     276548
> and of course the "free 8072" is illusory. In reality much more is
> available -- at least, I suppose, the 59732KB shown as "free" on the
> "-/+ buffers/cache" line, though I'm not really sure about this, since I
> don't know the underlying details of this report.
Linux uses spare memory for disk buffering/caching, so the top "free" figure will always be small on any well-used machine. The kernel doesn't actually need that cache: it can shrink it to nothing or let it grow to fill memory as demand requires. The "free" figure on the "-/+ buffers/cache" line is the amount of memory actually available to applications; if it were all in use, there would simply be little or no disk buffering/caching left.
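That "-/+ buffers/cache" figure can be computed directly from /proc/meminfo, which is where 'free' gets its numbers. A minimal sketch (field names as used by 2.4/2.6-era kernels; all values are in KB):

```shell
# Memory an application could reasonably claim:
# MemFree plus the reclaimable Buffers and Cached pools.
awk '/^MemFree:/ {free   = $2}
     /^Buffers:/ {bufs   = $2}
     /^Cached:/  {cached = $2}
     END {print free + bufs + cached " KB available"}' /proc/meminfo
```

On the figures quoted above that works out to 8072 + 1476 + 50184 = 59732 KB, i.e. exactly the "free" column of the "-/+ buffers/cache" line.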
> My question is: suppose I want to embark on a RAM-intensive task, e.g. a
> numerical computation with large objects, or maybe a CD burn, where I'd
> like to make the most use of RAM. I can tell the application how much RAM
> it can expect to use.
> So: how can I determine, by some system command, how much of the RAM is
> actually up for grabs, so that I can tell the program to use (e.g.) 64MB
> RAM? In other words, how could I find out that 64MB RAM would be
> available?
> And should I use 'nice' to make sure that the program can jump the queue?
> All advice appreciated!
> Best wishes to all, Ted.
> E-Mail: (Ted Harding) Ted.Harding@nessie.mcc.ac.uk
> Fax-to-email: +44 (0)870 167 1972
> Date: 27-Jul-04  Time: 19:18:28
> ------------------------------ XFMail ------------------------------