Hi all,
I'm looking for some advice and hints on how to go about installing and managing a large group of desktop machines, with shared file stores and resources such as applications, printers, etc. By large, I mean more than 500 machines. Think of a large university lab or internet cafe, where people can log in to any machine, do their 'work', access their files independent of the terminal used, etc.
Based on my knowledge so far, under Debian GNU/Linux the actual installation of all the machines is relatively straightforward. Once I have set up the roles and machine groupings (based on tasks and application needs), I can use FAI (Fully Automatic Installation), which holds the master 'settings', to assign each machine to a group and watch the install take place.
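If I have understood the FAI config space correctly, the scripts in its class/ directory are just executables whose stdout lines become class names for the machine being installed, so assigning a machine to a group could be as simple as something like the following (the hostname prefixes and group names are placeholders I have made up):

#!/usr/bin/env python
# Hypothetical FAI class script (something like class/50-host-groups).
# As far as I understand it, FAI runs the executables in the class/
# directory and treats each line printed to stdout as a class name for
# the machine being installed.  The hostname prefixes and group names
# below are placeholders, not a real scheme.

import socket

hostname = socket.gethostname().split('.')[0]

GROUPS = {
    'lab':   'LAB_DESKTOP',    # general teaching-lab machines
    'cafe':  'CAFE_TERMINAL',  # cut-down internet terminals
    'staff': 'STAFF_DESKTOP',  # staff machines with extra applications
}

print('FAIBASE')               # base class shared by every machine (assumed name)
for prefix, fai_class in GROUPS.items():
    if hostname.startswith(prefix):
        print(fai_class)
        break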
As and when machines fail, I can just configure a new one for a specific group and then let it install and put it on the network.
My confusion comes when I want to work out how to get central authentication, shared file stores, and where the line should be drawn for local/network storing of applications, files, etc. There is also the problem of scaling to 500+ machines.
I know this is possible, as I used to see it in action at UEA when I studied in SYS. xdm used to talk to something (LDAP?) before allowing login. The directory of users/passwords was shared between nearly all machines, so you could log in to any machine using your assigned userid.
So: 1) How do I do central authentication? (My rough guess at what this involves is sketched after the questions.)
2) Could I use NFS for shared file access, with each user's /home mounted read/write at login, or would 500+ clients doing this effectively be a denial-of-service attack on the server? Are there more scalable alternatives with intermediate caching, etc.?
3) Will > 500 desktops require more than one server for shared file access? What are the preferred replication procedures in this case?
4) What is the best way to network that many machines? I can run switches and bridges between the machines. Should I load-balance or shape the traffic in some way to ensure fair and fast access to the shared file store? How do I check for network saturation other than looking at blinking coloured lights :) (my best guess so far is also sketched after the questions)
5) Security. I would assume standard unix security would stop one user fiddling with another's files, but how does that work over a network file store?
6) Given the number of machines, am I correct to assume that it is best to have applications installed locally on each one?
7) Is there a way to sandbox the user, so they can't get into parts of the system they aren't supposed to? I know that there is the normal root access restriction, but what about forcing them to use their network file store rather than leaving stuff in /tmp?
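On question 1, my best guess so far is PAM and NSS pointing at a shared LDAP directory (libpam-ldap/libnss-ldap on Debian), so that a login on any machine ends up as a bind against the central server. Roughly, I imagine the check boils down to something like this (using the python-ldap module; the server name, base DN and ou=People layout are placeholders, not a real setup):

# Rough sketch of the login check I think PAM/LDAP performs: look the
# user up in the shared directory and try to bind with their password.
# Needs the python-ldap module; the server URI, base DN and the
# uid-under-ou=People layout are guesses, not a real site.

import ldap

LDAP_URI = 'ldap://auth.example.org'        # hypothetical central server
BASE_DN = 'ou=People,dc=example,dc=org'     # hypothetical user subtree

def check_login(username, password):
    conn = ldap.initialize(LDAP_URI)
    try:
        # Find the user's entry so we know which DN to bind as.
        results = conn.search_s(BASE_DN, ldap.SCOPE_SUBTREE,
                                'uid=%s' % username, ['uid'])
        if not results:
            return False
        user_dn = results[0][0]
        # Re-bind as that user; the directory checks the password for us.
        conn.simple_bind_s(user_dn, password)
        return True
    except ldap.INVALID_CREDENTIALS:
        return False
    finally:
        conn.unbind_s()

if __name__ == '__main__':
    print(check_login('jbloggs', 'not-my-real-password'))

Is that roughly the right picture, or is there a better way to do it at this scale?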
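On question 4, the best I have come up with so far is to sample the byte counters the kernel exposes in /proc/net/dev (or run something like MRTG or ntop against the switches). Something along these lines is what I had in mind, assuming eth0 is the interface facing the file server and a 100 Mbit/s link; corrections welcome:

# Quick-and-dirty saturation check: sample the byte counters in
# /proc/net/dev and print the throughput on one interface.
# 'eth0' and the 100 Mbit/s figure are assumptions -- they would need to
# match the interface and link speed facing the file server.

import time

IFACE = 'eth0'
LINK_BITS_PER_SEC = 100 * 1000 * 1000   # assumed 100 Mbit/s link

def read_bytes(iface):
    """Return (rx_bytes, tx_bytes) for iface from /proc/net/dev."""
    with open('/proc/net/dev') as f:
        for line in f:
            if line.strip().startswith(iface + ':'):
                fields = line.split(':', 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError('interface %s not found' % iface)

def sample(interval=5):
    rx1, tx1 = read_bytes(IFACE)
    time.sleep(interval)
    rx2, tx2 = read_bytes(IFACE)
    rx_bps = (rx2 - rx1) * 8.0 / interval
    tx_bps = (tx2 - tx1) * 8.0 / interval
    busiest = max(rx_bps, tx_bps)
    print('rx %.1f Mbit/s  tx %.1f Mbit/s  (~%.0f%% of link)' %
          (rx_bps / 1e6, tx_bps / 1e6, 100.0 * busiest / LINK_BITS_PER_SEC))

if __name__ == '__main__':
    while True:
        sample()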
I know quite a bit of this depends on what the machines will actually be used for. At present, I am looking for more general advice from people who have done this before, to help guide my research into this topic.
In fact, I am also interested in how this would work on non-Linux machines, e.g. Windows and Macs. Do these need to run separately, or could non-Linux machines authenticate against the same shared profile directory? I know there are many large Windows-based deployments. For example, Norfolk Library Services runs one for internet and application access; they seem to run a reduced version of Windows that limits what the user can and can't do. Does anyone know how that actually works? To keep this on topic, please email me off-list for non-Linux discussions.
--
Ashley T. Howes, Ph.D. http://www.ashleyhowes.com
"The philosophers have only interpreted the world in different ways; the point is to change it." - Karl Marx