Mirlyn
Well-Known Member
I posted this over at HWC, but I figure I should try my luck everywhere...
I'm using a stack of 3Com 3300-series managed switches on the network. There are several runs from the stack to one of our larger rooms, where they're split off through two 16-port OfficeConnects, also from 3Com. These 16-port switches went in about a week ago, and since then we've had a tremendous problem with timeouts. Pinging a machine elsewhere on the stack (and not in the room) will sometimes show as much as 40% packet loss. Before the new 16-port switches, we had five 8-port D-Links in the room, and timeouts were never a problem, if they existed at all.
Now machines in this room are losing connectivity completely, which causes SSH sessions and domain logins to hang for a few minutes or freeze permanently. I originally thought it was the new 3Coms, but when I moved them onto a separate temporary network I got the same severe timeouts. For further testing, I tried the original D-Link switches on that temporary network and got timeouts as well, though to a lesser degree.
I've run new cable around the room and still get timeouts. I changed the stack port settings to 100 full duplex, then 100 half duplex, and tried enabling/disabling flow control for both. No success: every combination showed 30-40% packet loss through the 3Com 16-port switches and 10-20% through the D-Links.
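For reference, this is roughly how I've been measuring each combination. It's a minimal sketch that assumes a Linux box in the room and uses a placeholder target address on the stack side:

[code]
#!/usr/bin/env python3
"""Fire a batch of single pings and tally the loss percentage.

Run once per switch/duplex/flow-control combination to get a
comparable number for each setting.
"""
import subprocess

TARGET = "10.0.0.10"   # placeholder: a host on the stack side, outside the room
COUNT = 100            # pings per test run

lost = 0
for _ in range(COUNT):
    # -c 1: one echo request, -W 1: give up after one second
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", TARGET],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    if result.returncode != 0:
        lost += 1

print(f"{TARGET}: {lost}/{COUNT} lost ({100.0 * lost / COUNT:.1f}%)")
[/code]

One run per setting gives me a directly comparable loss figure instead of eyeballing ping output.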
I'm thinking it might be the cable from this room back to the stack (original Cat3 from when the building was wired in 1992). Forcing everything down to 10 Mbit, both FD and HD, gave better results. But why would everything go to hell when the only thing that changed was swapping the D-Links for the 3Coms? Surely if the cable couldn't support 100 Mbit we would have hit this long before, back when the 100 Mbit D-Links went in, and those ran at full duplex. The same original cable is still in use all over the building on longer runs with very few problems, and we even had gigabit running over it at one point, which makes me think it's not the cable from the room to the stack.
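One check I still want to do to rule the cable in or out is to watch the kernel's interface error counters on a machine in the room while pushing traffic across the run. A rough sketch, assuming a Linux host whose NIC facing the run is eth0 (adjust the name as needed):

[code]
#!/usr/bin/env python3
"""Snapshot receive/transmit error counters from /proc/net/dev.

Run it, push some traffic across the suspect run, run it again and
compare. Climbing error/drop counters suggest a physical-layer or
duplex problem rather than anything higher up.
"""
IFACE = "eth0"   # placeholder: the NIC facing the suspect run

with open("/proc/net/dev") as f:
    for line in f:
        if ":" not in line:
            continue                      # skip the two header lines
        name, stats = line.split(":", 1)
        if name.strip() != IFACE:
            continue
        fields = stats.split()
        # /proc/net/dev layout: 8 receive fields, then 8 transmit fields
        rx_errs, rx_drop = fields[2], fields[3]
        tx_errs, tx_drop = fields[10], fields[11]
        print(f"{IFACE}: rx_errs={rx_errs} rx_drop={rx_drop} "
              f"tx_errs={tx_errs} tx_drop={tx_drop}")
[/code]

If the receive errors climb only during transfers over that run, that would finally point a finger at the cable (or at the link settings on it).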
I'm stumped. Any ideas? I've read that flow control should be disabled when uplinking to hubs, but these are all switches, and disabling flow control didn't improve anything anyway.
I've updated the firmware on the stack to 2.70 and still have the same problem. I also took one of the new 16-port switches down to the stack and plugged it into a run that goes to another 16-port in the lab: no problems, pings came back clean, no timeouts. It's almost as if the older 3Com gear doesn't like the newer stuff.
When the stack port is forced to 100FD, the uplink port on the 16-port switch goes up and down irregularly, as if it's still trying to negotiate the speed.
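To get an actual record of how often the link bounces, I'm planning to hang a test box off a stack port forced to 100FD and log link state changes over time; a quick sketch along these lines (the interface name is a placeholder):

[code]
#!/usr/bin/env python3
"""Log link state / speed / duplex changes once a second.

Meant for a test box plugged into a stack port forced to 100FD, to
see whether the link really is bouncing and what it settles on.
"""
import time

IFACE = "eth0"   # placeholder: the NIC under test

def read(attr):
    # speed/duplex may be unreadable while the link is down, hence the fallback
    try:
        with open(f"/sys/class/net/{IFACE}/{attr}") as f:
            return f.read().strip()
    except OSError:
        return "?"

last = None
while True:
    state = (read("carrier"), read("speed"), read("duplex"))
    if state != last:
        stamp = time.strftime("%H:%M:%S")
        print(f"{stamp} carrier={state[0]} speed={state[1]} duplex={state[2]}")
        last = state
    time.sleep(1)
[/code]

If the speed or duplex it settles on keeps flip-flopping, that would back up the idea that the two ends aren't agreeing on the link settings.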
I'm stumped. We're going to call 3Com in the morning and see what they think, but I was wondering if I'm missing something obvious. Any suggestions? The bang-head-on-wall method is starting to leave a mark.
