Hi,
Here's one for the networking gurus on this list :-)
At work we are intending to use a Linux machine running Snort as an IDS (much cheaper than the Cisco alternative) with two gigabit NICs.
We have been doing some throughput tests (without Snort installed), and are not convinced this will work.
Our test setup is as follows:-
-----------          -----------          -----------
- Windows -          -  Linux  -          - Windows -
- 2003 srv------------         ------------ 2003 srv-
-----------          -----------          -----------
10.128.30.2     eth0 10.128.30.1   10.128.32.2 eth1    10.128.32.1
All machines have gigabit NICs and are connected via a Foundry gigabit switch. The Linux machine we used was an HP ProLiant 3.0 GHz dual Xeon with twin onboard Broadcom NICs using the tigon driver. We had the Linux machine configured as a basic router. It was running Knoppix, booted into text-only mode, with the NICs manually configured (i.e. only bash and the kernel running, and no iptables). We were doing the tests using netperf. We had applied the NIC tweaks from this site:
http://www.enterpriseitplanet.com/networking/features/article.php/3497796
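For reference, the router side of the setup was roughly along these lines (this is from memory, so treat the exact tuning values from that article as approximate):

  # interfaces as in the diagram above; no iptables modules loaded
  ifconfig eth0 10.128.30.1 netmask 255.255.255.0 up
  ifconfig eth1 10.128.32.2 netmask 255.255.255.0 up
  echo 1 > /proc/sys/net/ipv4/ip_forward

  # tuning along the lines of the article (values approximate)
  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.wmem_max=16777216
  sysctl -w net.core.netdev_max_backlog=2500
  ifconfig eth0 txqueuelen 1000
  ifconfig eth1 txqueuelen 1000

  # each Windows box uses the Linux machine as its default gateway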
With the two Windows machines on the same subnet, the "Network Utilisation" graph in Task Manager showed we were running at 85-90% utilisation. When we had the Linux machine acting as a router between the two machines, this dropped to 35%. Admittedly this probably isn't the best measurement method.
Using netperf between the Windows machines and the Linux machine again gave 85-90% utilisation. This was with both Windows machines sending data to both NICs of the Linux machine at the same time.
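The netperf runs were the standard TCP stream test, roughly as follows (exact options from memory):

  # on the receiving machine
  netserver

  # from each sender, run at the same time for the dual-NIC test
  netperf -H 10.128.30.1 -t TCP_STREAM -l 60
  netperf -H 10.128.32.2 -t TCP_STREAM -l 60

  # routed test: from 10.128.30.2 to the Windows box on the far subnet
  netperf -H 10.128.32.1 -t TCP_STREAM -l 60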
We can't understand how the throughput can drop by more than half when sending between the two Windows machines via the Linux machine. This system is going to go on a very busy network, so speed is essential.
We also tried the above test with a Dell single-processor Xeon with onboard Intel and PCI Realtek gigabit NICs, and got very similar results. All tests were done with Knoppix 3.9 (2.6.11 kernel). On the production system we'll probably be using Red Hat Enterprise Linux.
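In case it helps narrow it down, this is roughly what we can gather on the router while a transfer is running:

  vmstat 1                 # overall CPU, including system/interrupt time
  cat /proc/interrupts     # are both NICs' interrupts landing on one CPU?
  cat /proc/net/dev        # per-interface packet/byte/drop counters
  ethtool -S eth0          # driver-level error and drop statistics
  ethtool -S eth1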
Anyone got any ideas? Are we missing something in the config?
Many Thanks
Chris
Chris Glover wrote:
Hi,
Morning
Here's one for the networking gurus on this list :-)
Will an IDS geek do?
At work we are intending to use a Linux machine running Snort as an IDS (much cheaper than the Cisco alternative) with two gigabit NICs.
We have been doing some throughput tests (without Snort installed), and are not convinced this will work.
Our test setup is as follows:-
-----------          -----------          -----------
- Windows -          -  Linux  -          - Windows -
- 2003 srv------------         ------------ 2003 srv-
-----------          -----------          -----------
10.128.30.2     eth0 10.128.30.1   10.128.32.2 eth1    10.128.32.1
All machines have gigabit NICs and are connected via a Foundry gigabit switch. The Linux machine we used was an HP ProLiant 3.0 GHz dual Xeon with twin onboard Broadcom NICs using the tigon driver. We had the Linux machine configured as a basic router. It was running Knoppix, booted into text-only mode, with the NICs manually configured (i.e. only bash and the kernel running, and no iptables). We were doing the tests using netperf. We had applied the NIC tweaks from this site:
http://www.enterpriseitplanet.com/networking/features/article.php/3497796
With the two Windows machines on the same subnet, the "Network Utilisation" graph in Task Manager showed we were running at 85-90% utilisation. When we had the Linux machine acting as a router between the two machines, this dropped to 35%. Admittedly this probably isn't the best measurement method.
Using netperf between the Windows machines and the Linux machine again gave 85-90% utilisation. This was with both Windows machines sending data to both NICs of the Linux machine at the same time.
We can't understand how the throughput can drop by more than half when sending between the two Windows machines via the Linux machine. This system is going to go on a very busy network, so speed is essential.
We also tried the above test with a Dell single-processor Xeon with onboard Intel and PCI Realtek gigabit NICs, and got very similar results. All tests were done with Knoppix 3.9 (2.6.11 kernel). On the production system we'll probably be using Red Hat Enterprise Linux.
Anyone got any ideas? Are we missing something in the config?
Use the modified libpcap from Los Alamos laboratories; it was built to capture GigE traffic. Use some filtering on your traffic and think about what you don't need to analyse (see the filter sketch after the tap instructions below). Don't route the data stream, tap the line instead. It's a hell of a lot more efficient. Making a tap is easy:
Get four Cat5 modular snap-in jacks and a bit of Cat5e cable.
1. Take a small length of cable and strip off the outer coating. Separate the eight internal wires. Partially assemble the housings by snapping the jacks into place.
2. Number the four ports 1 to 4 from the left and the pins on each 1 to 8 from the left.
3. Orange wire to pin 1/port 1, through pin 6/port 2, to pin 1/port 4.
4. Orange wire / white stripe to pin 2/port 1, through pin 3/port 2, to pin 2/port 4.
5. Green wire / white stripe to pin 3/port 1, through pin 3/port 3, to pin 3/port 4.
6. Blue wire / white stripe to pin 4/port 1, straight to pin 4/port 4.
7. Blue wire to pin 5/port 1, straight to pin 5/port 4.
8. Green wire to pin 6/port 1, through pin 3/port 3, to pin 6/port 4.
9. Brown wire to pin 7/port 1, straight to pin 7/port 4.
10. Brown wire / white stripe to pin 8/port 1, straight to pin 8/port 4.
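On the filtering point above: both tcpdump and Snort take a standard BPF expression on the command line, so you can drop traffic you never want to analyse before it reaches the detection engine. The hosts and ports in this sketch are only placeholders, obviously:

  # try the filter with tcpdump first
  tcpdump -i eth0 -n 'not (host 10.128.30.50 and port 445) and not port 3389'

  # then hand Snort the same expression (trailing BPF filter)
  snort -i eth0 -c /etc/snort/snort.conf 'not (host 10.128.30.50 and port 445) and not port 3389'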
Hi Peter,
Thanks for the tips. We set it up as a router initially just to test the theory; we are nowhere near running Snort on it yet. We couldn't work out why the throughput would drop by so much, though.
When I'm back at work next Monday, I'll have a go at the listener idea.
Thanks for your help
Chris
On Tue, 2005-08-30 at 00:13 +0000, Peter D. Bassill wrote:
Use the modified libpcap from Los Alamos laboratories; it was built to capture GigE traffic. Use some filtering on your traffic and think about what you don't need to analyse. Don't route the data stream, tap the line instead. It's a hell of a lot more efficient.
Making a tap is easy:
Get four Cat5 modular snap-in jacks and a bit of Cat5e cable.
1. Take a small length of cable and strip off the outer coating. Separate the eight internal wires. Partially assemble the housings by snapping the jacks into place.
2. Number the four ports 1 to 4 from the left and the pins on each 1 to 8 from the left.
3. Orange wire to pin 1/port 1, through pin 6/port 2, to pin 1/port 4.
4. Orange wire / white stripe to pin 2/port 1, through pin 3/port 2, to pin 2/port 4.
5. Green wire / white stripe to pin 3/port 1, through pin 3/port 3, to pin 3/port 4.
6. Blue wire / white stripe to pin 4/port 1, straight to pin 4/port 4.
7. Blue wire to pin 5/port 1, straight to pin 5/port 4.
8. Green wire to pin 6/port 1, through pin 3/port 3, to pin 6/port 4.
9. Brown wire to pin 7/port 1, straight to pin 7/port 4.
10. Brown wire / white stripe to pin 8/port 1, straight to pin 8/port 4.
On Mon, 2005-08-29 at 20:37 +0100, Chris Glover wrote:
Hi,
Here's one for the networking gurus on this list :-)
I did a fair amount of benchmarking with Linux (needed a very low cost traffic generator).
Performance with a cheap Realtek card wasn't much more than 40 Mbps on a 1 GHz Athlon (CPU usage at 99%). On a 2.4 GHz P4 with a 3Com it was possible to saturate a 100 Mbit LAN (CPU usage was around 30%).
I'm not familiar enough to judge the quality of the Broadcom cards, but a better quality card may be worth a try.
It may be less resource intensive to bridge the ports (assuming you can still hook Snort in-line that is).
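Something along these lines with bridge-utils would do it (interface names assumed; I haven't tried Snort on a bridge myself):

  # put both gig ports into a bridge; the bridge itself needs no IP address
  brctl addbr br0
  brctl addif br0 eth0
  brctl addif br0 eth1
  ifconfig eth0 0.0.0.0 up
  ifconfig eth1 0.0.0.0 up
  ifconfig br0 up

  # Snort can then sniff the bridge interface (or either physical port)
  snort -i br0 -c /etc/snort/snort.conf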
Hi Adam
On Wed, 2005-08-31 at 19:47 +0100, Mr. Adam Allen. wrote:
I did a fair amount of benchmarking with Linux (needed a very low cost traffic generator).
Performance with a cheap Realtek card wasn't much more than 40 Mbps on a 1 GHz Athlon (CPU usage at 99%). On a 2.4 GHz P4 with a 3Com it was possible to saturate a 100 Mbit LAN (CPU usage was around 30%).
I'm not familiar enough to judge the quality of the Broadcom cards, but a better quality card may be worth a try.
I'm hoping they are OK; we were using a brand new dual-NIC HP ProLiant, taken out of the box that morning. Broadcom cards can't be all bad; Apple use Broadcom NICs in their G5 machines.
It may be less resource intensive to bridge the ports (assuming you can still hook Snort in-line that is).
I had the same thought this morning; I'll try that out when I get to work on Monday.
Thanks
Chris