link down - Local Fault
‎05-31-2016 02:08 PM
I have been seeing an issue in my virtual environment where I randomly lose connectivity to iSCSI LUNs. My hosts show a loss of connectivity, and when I looked at the logs on my X670V-48t I found what looked to be the port going down in the same time frame:
05/22/2016 19:23:28.19 Slot-1: Port 1:21 link down - Local fault
05/22/2016 19:23:57.64 Slot-1: Port 1:21 link UP at speed 10 Gbps and full-duplex
05/22/2016 19:23:58.99 Slot-1: Configuration mismatch detected by DCBX (Baseline v1.01) for the PFC TLV on port 1:21.
05/22/2016 19:23:59.99 Slot-1: Configuration mismatch resolved by DCBX (Baseline v1.01) for the PFC TLV on port 1:21.
I need a little insight on what is possibly going on here.
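When flaps like this recur, it helps to tally how often each port is reporting the fault before chasing hardware. This is a minimal sketch (not an official tool) that counts "link down - Local fault" events per port from EXOS-style log lines in the format shown above; adjust the regex if your log output differs:

```python
import re

# Matches the sample EXOS log format above, e.g.:
#   05/22/2016 19:23:28.19 Slot-1: Port 1:21 link down - Local fault
LINK_DOWN = re.compile(r"Port (\S+) link down - Local fault")

def count_link_faults(log_lines):
    """Return a dict mapping port -> number of 'link down - Local fault' events."""
    counts = {}
    for line in log_lines:
        m = LINK_DOWN.search(line)
        if m:
            port = m.group(1)
            counts[port] = counts.get(port, 0) + 1
    return counts

sample = [
    "05/22/2016 19:23:28.19 Slot-1: Port 1:21 link down - Local fault",
    "05/22/2016 19:23:57.64 Slot-1: Port 1:21 link UP at speed 10 Gbps and full-duplex",
]
print(count_link_faults(sample))  # {'1:21': 1}
```

Feed it the output of `show log` saved to a file and you can quickly see whether the faults are spread evenly or concentrated on a few ports.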
8 Replies
‎11-09-2016 07:41 PM
Just an FYI for anyone else who sees this: I just had this issue on a 10 Gb connection between an X620 and an X460. I swapped out the SFP+ cable and the problem went away.
‎06-01-2016 11:24 AM
The ports affected are the same set of ports each time.
None of the ports are part of an aggregation.
It happened to each of the ports 3 times in a 2-hour period.
XOS version is 15.3.3.5.
This was happening on 9 different ports attached to 3 different hosts, 3 ports per host:
Host A - 1:3 mgmt, 2:26 iSCSI, 2:18 VM data (multiple VLANs)
Host B - 1:38 mgmt, 1:21 iSCSI, 2:8 VM data (multiple VLANs)
Host C - 2:21 mgmt, 2:14 iSCSI, 1:12 VM data (multiple VLANs)
Two of the hosts, Host A and Host B, were affected enough that they needed to be rebooted. Host C, for some reason, did not show any signs of distress.
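Given the port-to-host layout above, a quick way to see whether the flaps cluster on one host is to aggregate per-port event counts by host. A minimal sketch, assuming the mapping from the table above; the counts passed in are whatever your log parsing produces, shown here with placeholder values:

```python
# Port-to-host mapping copied from the table above.
PORT_TO_HOST = {
    "1:3": "Host A", "2:26": "Host A", "2:18": "Host A",
    "1:38": "Host B", "1:21": "Host B", "2:8": "Host B",
    "2:21": "Host C", "2:14": "Host C", "1:12": "Host C",
}

def flaps_by_host(port_counts):
    """Aggregate per-port link-flap counts into per-host totals."""
    totals = {}
    for port, n in port_counts.items():
        host = PORT_TO_HOST.get(port, "unknown")
        totals[host] = totals.get(host, 0) + n
    return totals

# Illustrative placeholder counts, not measured data.
print(flaps_by_host({"1:21": 3, "2:26": 3, "2:14": 3}))
```

If every host shows the same totals at the same times, that points toward something switch-side (fabric, firmware, DCBX negotiation) rather than one bad cable or NIC.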
Here is the full log from the time frame:
‎06-01-2016 11:24 AM
Configure the SNMPv3 commands below on an EXOS switch, then add the switch in NetSight using the 'snmp_v3_profile'.
# configure snmpv3 add user snmpuser authentication md5 snmpauthcred privacy des snmpprivcred
# configure snmpv3 add group snmpgroup user snmpuser sec-model usm
# configure snmpv3 add access snmpgroup sec-level priv read-view defaultAdminView write-view defaultAdminView notify-view defaultNotifyView
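Once the commands above are in place, the credentials can be sanity-checked from a management station with the net-snmp tools before adding the switch to NetSight. A hedged sketch: the address 192.0.2.1 is a placeholder for your switch's management IP, and the user, protocols, and passphrases mirror the `configure snmpv3 add user` command above.

```shell
# Walk the switch's system subtree using the SNMPv3 user configured above.
# -l authPriv matches 'sec-level priv'; MD5/DES match the add-user command.
# 192.0.2.1 is a placeholder; substitute the switch's management IP.
snmpwalk -v3 -l authPriv -u snmpuser \
         -a MD5 -A snmpauthcred \
         -x DES -X snmpprivcred \
         192.0.2.1 system
```

If the walk returns sysDescr and friends, NetSight should be able to poll the switch with the same profile; a timeout usually means the user, passphrases, or access entry don't match.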
