Topics

A big thanks to everyone who weighed in on the somewhat recent 10Gb Ethernet discussion


Sam Lysinger
 

I had considered going 10Gb at the shop about 5 years ago, but it was
going to run about $7,500, and that was not where I wanted to spend my
money. Now that it is affordable, I have taken the plunge. Between the
AHCS and a Cisco engineer friend of mine, I spent a few bucks and a day,
got my switch configured and tested with a 10Gb Intel card in a PC, and
then moved the card to my ESXi server, which supports dot1Q trunking
natively. The speed boost is only noticeable when accessing VMs, but
that is a huge time saver and worth every dime.
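For anyone who wants to sanity-check a dot1Q trunk like that before trusting it, one rough test is to push a VLAN-tagged probe out of the PC-side NIC and see whether anything on that VLAN answers. Below is a minimal sketch using scapy; the interface name, VLAN ID, and target address are placeholders, not details from this setup:

    # Rough check that a trunk port actually carries a given VLAN:
    # send an 802.1Q-tagged ARP request and wait for a reply.
    # Needs root and scapy installed; all values below are placeholders.
    from scapy.all import ARP, Dot1Q, Ether, srp1

    IFACE = "eth0"           # NIC plugged into the trunk port
    VLAN_ID = 10             # VLAN expected to be allowed on the trunk
    TARGET = "192.168.10.1"  # a host known to live on that VLAN

    probe = (Ether(dst="ff:ff:ff:ff:ff:ff")
             / Dot1Q(vlan=VLAN_ID)
             / ARP(pdst=TARGET))

    reply = srp1(probe, iface=IFACE, timeout=2, verbose=False)
    if reply is not None:
        print(f"VLAN {VLAN_ID} is passing: {TARGET} is at {reply[ARP].hwsrc}")
    else:
        print(f"No answer on VLAN {VLAN_ID}; the trunk may not allow it")

If the tagged ARP gets an answer for each VLAN you expect, the trunk is at least letting the right tags through.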

Thanks!


Pete Rittwage
 

Yeah, 10G is Gen1 and cheap now, and it makes a big difference over regular GigE.

25G is Gen2, and that stuff should be cheap now or soon as well... followed by
40G, 100G, and now 400G...

It's nice, but your bottlenecks move and cause other problems. Some things
were never designed to be overburdened, like your storage controllers and
even PCIe...




Sam Lysinger
 

Luckily my ESXi server sits idle 99% of the time, and when it does run, it
is doing something automated, so I won't notice the next bottleneck for
a while (as I probably won't pay attention) ^_^ I do admit that RAID 5
on 4 spindles was not the best plan for performance, but 500GB SSDs now
cost about 50 bucks less than what I paid for the 500GB drives I put in
there.

I remember the jump from 10Mb Ethernet to 100Mb Fast Ethernet; we were
cooking with gas back then...




DavidKuder
 

RAID-6 on 32 spinners isn't any better speed-wise, I'm afraid. At least I can survive a quarter of my drives going out in quick succession.




alan@alanlee.org
 

I could actually use some help with 10G Ethernet. I have speed issues I cannot resolve.

Machine #1 is an i9/12-core with 8x 1TB SATA3 SSDs on an LSI 93xx-series 8-lane PCIe RAID controller in RAID6 (6+2). I get about 2 GBytes/s write and 3 GBytes/s read. The motherboard has an integrated 10Gb Ethernet port.
It also has a single NVMe SSD (Samsung EVO, 4-lane PCIe 3) that gets about 2 GBytes/s read. It runs Windows 10 Pro.

Machine #2 is an i7/6-core with an NVMe 4-lane PCIe 3 module (about 2 GBytes/s read) and 8x 10TB hard drives on the built-in SATA ports using Linux soft RAID5. I get about 1.5 GBytes/s read on the RAID volume. I have an 8-lane PCIe 10Gbit NIC installed in a slot with 8-lane access.

I have a Netgear WNR2000v5 10Gbit Ethernet switch between them, with CAT7 cabling to the pair of copper 10Gbit ports.

I've tried just about everything and can only get about 40 MBytes/s of transfer between them. I've tried SMB, NFS (2, 3, and 4), unencrypted rsync... you name it, protocol-wise. I've tried jumbo frames, regular frames, and setting the MTU anywhere from 1500 all the way to 100000. I've tried about three different driver versions on each NIC and three different switch firmware versions. I've tried NVMe to NVMe, even /dev/null to /dev/null. Nothing seems to work.
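
As a point of reference, a raw single-stream TCP transfer takes both the disks and the file-sharing protocols out of the picture entirely; iperf3 does this, and the minimal Python sketch below does roughly the same thing. The address and port are placeholders:

    # receiver.py -- run on one machine; the port is a placeholder
    import socket
    import time

    PORT = 5201

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, addr = srv.accept()

    total = 0
    start = time.time()
    while True:
        chunk = conn.recv(1 << 20)            # read in 1 MiB chunks
        if not chunk:
            break
        total += len(chunk)
    elapsed = time.time() - start
    print(f"received {total / 1e9:.2f} GB at {total / elapsed / 1e6:.0f} MB/s from {addr[0]}")

    # sender.py -- run on the other machine; RECEIVER_IP is a placeholder
    import socket

    RECEIVER_IP = "192.168.1.20"
    PORT = 5201
    payload = b"\x00" * (1 << 20)             # 1 MiB buffer, sent repeatedly

    with socket.create_connection((RECEIVER_IP, PORT)) as s:
        for _ in range(10_000):               # roughly 10 GB total
            s.sendall(payload)

If this also tops out around 40 MBytes/s, the problem is below the file-sharing layer (NIC, driver, cabling, or switch); if it runs close to line rate, the storage stack or the sharing protocol is the more likely suspect.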

I don't have any other 10Gbit equipment to add a third test point to pinpoint the problem. The only other thing I can do locally is put a crossover connection directly between the two machines, but I haven't had a lull long enough to reconfigure the network without causing a huge disruption for a day.

Any advice?

-Alan



Pete Rittwage
 

Hi Alan,

Wow, that is bad... That is bad even for 1G Ethernet.

Without troubleshooting and just taking a shot in the dark, I would try
your suggestion about removing the switch to see if that changes anything.

I am probably biased, though, because I never use consumer-grade
equipment like Netgear for this. It may be fine, I don't know. When I
look up that model, it's just a little home WiFi router, so maybe a
typo?
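
One cheap check on the Linux side before pulling cables is to confirm what the NIC actually negotiated; sysfs exposes the link speed and MTU. A minimal sketch, with a placeholder interface name:

    # check_link.py -- report negotiated link speed and MTU on Linux
    # "enp3s0" is a placeholder; substitute the actual 10G interface name
    from pathlib import Path

    IFACE = "enp3s0"
    base = Path("/sys/class/net") / IFACE

    speed_mbps = int((base / "speed").read_text())   # negotiated speed in Mb/s
    mtu = int((base / "mtu").read_text())

    print(f"{IFACE}: {speed_mbps} Mb/s, MTU {mtu}")
    if speed_mbps < 10000:
        print("Link did not come up at 10GbE -- suspect the cable, port, or switch")

If either end reports anything other than 10000, chase the negotiation problem before worrying about protocols or drivers.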

-Pete




alan@alanlee.org
 

Yes, wrong IP. It is a GS110EMX.
