On Tue, 1 Nov 2005 10:47:07 +0000 "Ashley T. Howes, Ph.D." lists@ashleyhowes.com wrote:
How do I do central authentication?
As far as I know there are at least the following possibilities:
1. NIS
2. NIS+
3. Kerberos
4. LDAP
And I believe Microsoft embraced and extended one of those to be Microsoft Active Directory (is it Kerberos?)
Could I use NFS for shared file access with user /home being mounted as r/w at login, or would this cause a denial of service attack on the server? Are there more scalable alternatives with intermediate caching, etc?
Will > 500 desktops require more than one server for shared file access? What are the preferred replication procedures in this case?
NFS can be used to mount remote home directories. I am not sure what you mean by the DoS attack though - are you referring to many people trying to log on at the same time?
NFS can be attacked deliberately but it has other security issues too.
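For concreteness, a minimal sketch of the server and client configuration for NFS-mounted homes - the hostname, network range and paths are just examples:

```
# Server side, /etc/exports -- export home directories to the local LAN only
/export/home  192.168.1.0/24(rw,root_squash,sync)

# Client side, /etc/fstab -- mount them over NFS at boot
nfsserver:/export/home  /home  nfs  rw,hard,intr  0  0
```

Restricting the export to your own subnet and keeping `root_squash` (remote root mapped to nobody, the default) addresses some, though not all, of the security issues mentioned.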
Personally I would think a modern server-class machine should have no trouble with 500 clients, but it will depend on exactly what the clients are doing, so don't quote me on that.
It is also possible to spread people's home directories around on different servers - in fact, if people normally use the machine on their own desk, it has often been done to put their home directory on that machine to maximise performance when they're at their desk, then remote mount it if they log in elsewhere. That does mean greater demands for physical security and careful management of backups.
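One common way to spread home directories across servers while still presenting a single /home to users is the automounter (autofs). A sketch of what the maps might look like - all hostnames, usernames and paths here are purely illustrative:

```
# /etc/auto.master -- hand /home over to the automounter
/home   /etc/auto.home

# /etc/auto.home -- hypothetical map: each user's home can live on
# a different server (e.g. their own desktop machine)
alice   -rw,hard,intr   desk-pc-alice:/export/home/alice
bob     -rw,hard,intr   central-srv:/export/home/bob
```

With this arrangement the mount only happens when the user actually logs in, and moving a user's home to another server is a one-line map change.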
What the best way to network that many machines? I can run switches and bridges between the machines. Should I load-balance or shape the traffic in some way to ensure fair and fast access to the shared file store? How do I check for network saturation other than looking at blinking coloured lights :)
One "finger in the air" approach to this would be to use more bandwidth the closer you get to the server. For example, the server could run Gigabit Ethernet between itself and a central switch, then 100Mbit out to the individual machines or a second layer of switches/hubs. That way any one machine can use at most 100Mbit of capacity to the server, i.e. 1/10 of its potential.
I am sure there are more elaborate schemes too but so much depends on what the machines are actually doing and what the consequences of a go slow would be.
Security. I would assume standard unix security would stop one user fiddling with another's files, but how does that work over a network file store?
With NFS it works the same way, in that the kernel will enforce the Unix ownership and permission bits even for remote files. Identifying file owner and group, though, relies on synchronised /etc/passwd and /etc/group (via NIS or similar), as all the authentication is done by numeric ID rather than by user name or token.
Am I correct to assume that it is best to have applications installed locally on the machines given the number of machines?
It depends on what you're doing, but they'd usually start faster that way. On the other hand you then have all those machines to upgrade when the time comes (though if the applications are in Debian packages this can be part of the same automated process with apt to apply security updates etc.)
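If the clients are Debian boxes, the automated upgrade step can be as simple as a scheduled apt run on each machine. A rough sketch - the schedule and file name are just an example, and in practice you'd want unattended runs tested carefully first:

```
# /etc/cron.d/apt-upgrade -- hypothetical nightly security update job
30 3 * * *  root  apt-get update -qq && apt-get -y -qq upgrade
```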
Is there a way to sandbox the user, so they don't enter parts of the system they aren't supposed to? I know that there is the normal root access restriction, but what about forcing them to use their network file store, rather than leaving stuff in /tmp.
We make sure people don't use /tmp for permanent stuff by having a system script tidy it up regularly. It was much like our rule that no-one could complain if unlabelled media got re-used for something else: if you couldn't be bothered to put a label on your tape or disk, you obviously didn't want the contents, and the same goes for /tmp vs. a home directory.
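The tidy-up itself can be little more than a find run from cron. A sketch - the 7-day cutoff is just an assumption, pick whatever grace period suits your users:

```shell
#!/bin/sh
# Hypothetical tidy script: delete regular files under the given
# directory that have not been modified for more than 7 days.
# -xdev stops it crossing onto other mounted filesystems.
tidy_tmp() {
    find "$1" -xdev -type f -mtime +7 -delete
}

# e.g. run nightly from cron as:  tidy_tmp /tmp
```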
If the users are using a captive set of applications then further possibilities exist within the applications concerned.
I know quite a bit of this depends on what the machines will be actually used for. At present, I am looking for more general advice from people who have done this before to help guide my research into this topic.
In fact, I am also interested in how this would work on non-Linux machines, e.g. Windows and Macs. Do these need to run separately, or could non-Linux based machines authenticate against a shared profile directory? I know there are many large Windows-based deployments. For example, Norfolk Library Services runs one for internet and application access. In fact, they seem to run a reduced version of Windows that limits what the user can and can't do. Does anyone know how that actually works? To keep this on topic, please email me off-list for non-Linux based discussions.
--
Ashley T. Howes, Ph.D. http://www.ashleyhowes.com
"The philosophers have only interpreted the world in different ways; the point is to change it." - Karl Marx
main@lists.alug.org.uk http://www.alug.org.uk/ http://lists.alug.org.uk/mailman/listinfo/main Unsubscribe? See message headers or the web site above!