on Fri, Sep 21, 2001 at 07:33:36PM +0100, Steve Fosdick scribbled:
One difference between free software and proprietary software in this respect is that with proprietary code you can hide sloppy coding, which makes it attractive if it helps to meet a deadline. With free or open source software the code is there for anyone to see, and someone who takes pride in his work would not want to be seen to be so lax.
This is true, to an extent. The problem is that it assumes someone actually looks at the code. Unless your project is widely used, or someone is specifically asked to do so, it is unlikely that anyone will audit your code at all thoroughly. WireX are about to launch a site that addresses code auditing, and it sounds quite neat (or they were; I can't find the email just at the moment). Basically, it's the SourceForge of code auditing, with auditor ratings, projects, etc.
The above difference may account for fewer bugs in free software, but I suspect it is not the major factor in the smaller number of successful attacks. That, I suspect, is due to these:
- Lower installed base for Linux.
- Fast time to fix when security related bugs are found.
- More savvy admins who keep their systems up to date with fixes.
- A different security model. I have not used 2k/NT a lot, but I've worked with admins on them and have never seen one either log in as a user who isn't a local administrator, or use a simple utility like su or sudo to temporarily increase their privileges for a single task.
- Less understanding and in-depth experience of unix in general amongst the virus-writing community.
There are two separate issues here:
- The bug that allows paths outside the web space.
How many people have apache chroot()ed? How many still have a parent process running as root with all available (POSIX.1e) capabilities enabled? Or even, how many people use suexec?
Under *nix, the ability to close this hole is in place, but it is not used much, either because it is tricky and can make things more awkward, or because the admin is simply unaware the facilities exist. Security needs to be the default, and it needs to be documented and easy.
- Allowing a program which was not specifically written as a web program to be used from the web server unmodified.
I like this feature. To stop it you potentially have to look at code signing, different binary formats, different syscall/API systems, trusted path execution, etc., all of which are messy and, in the end, avoidable.
I think this vulnerability demonstrates not only a sloppy implementation, but also a flawed approach to security. It would seem that whenever there is a choice between giving developers control over the user and implementing some sensible security, M$ choose the former.
Yes, it's the old security vs. usability argument alas.
In this case it is time that IE (and e-mail programs) deliberately made it hard to execute code downloaded from the net rather than making it easy - that would ensure people didn't do it unwittingly and the software didn't do it by mistake. The consequences are too serious to be playing around.
Why not run certain applications in a sandbox? Or at least with reduced privileges, e.g. the web browser can *only* write to home_directory/.web/ and also has certain system calls disabled, like the ability to execute anything outside some trusted path (because users want helper applications like flash, telnet, email, etc.).
So, back to MJR's email in which he says these are Windows problems. I beg to differ. I think these are M$ problems, and if M$ deigned to write software for Linux it would be fraught with all the same issues.
I also think they are programmer problems. Arguments are made blaming the language the programmer uses - "C has buffer overflows, use Perl/whatever!" - but once you take buffer overflows out of the picture, you are still left with programmers who cannot build a decent authentication system. (Think Hotmail, banks, etc.)