Vulnerability scanning caused issues?

T_Pitch
New Contributor III
Hello,

This past weekend we ran a vulnerability scan on our network. We've run these before without issue; however, this time one of my switch stacks had a problem. The scan essentially performs brute-force logins over SSH and Telnet.

The stack that had the issue is a stack of two, managed out-of-band and running EXOS 22.1.1.5. The primary switch (#1) reported that the exsshd process died, and at 13:14:30.28 the primary node was reported as DOWN. The other switch (#2) stepped in and became the primary.

Questions:
  1. Would a brute-force "attack" have resulted in behavior like this? I'm concerned because we have a number of other EXOS stacks and have never had this problem before.
  2. We only have one management port connected, and it was on the primary switch. When the failure happened it stopped working, and I had to move the cable to the backup switch to regain management access. While I assume this is expected behavior, would there be any harm in connecting the management port on both switches? Maybe that's best practice...
Other thoughts?

Thanks folks.

code:
13:17:16.67  Slot-1: Input voltage to Internal PSU-2 in slot 2 is on. Output enabled.
13:17:16.67 Slot-1: Internal PSU-2 in slot 2 is present.
13:17:16.67 Slot-1: Input voltage to Internal PSU-1 in slot 2 is on. Output enabled.
13:17:16.67 Slot-1: Internal PSU-1 in slot 2 is present.
13:17:16.67 Slot-1: Input voltage to Internal PSU-2 in slot 1 is on. Output enabled.
13:17:16.67 Slot-1: Internal PSU-2 in slot 1 is present.
13:17:16.64 Slot-1: Input voltage to Internal PSU-1 in slot 1 is on. Output enabled.
13:17:16.64 Slot-1: Internal PSU-1 in slot 1 is present.
13:17:13.61 Slot-1: Backup is in SYNC
13:17:09.58 Slot-1: Port 2:30 is UP with speed 10000, Add to aggregator 1:30 with speed: 10000
13:17:09.58 Slot-1: Port 2:30 link UP at speed 10 Gbps and full-duplex
13:17:09.57 Slot-1: Port 2:2 link UP at speed 1 Gbps and full-duplex
13:17:09.57 Slot-1: Port 2:1 link UP at speed 1 Gbps and full-duplex
13:17:09.56 Slot-1: Port 1:30 is UP with speed 10000, Add to aggregator 1:30 with speed: (down)
13:17:09.56 Slot-1: Port 1:30 link UP at speed 10 Gbps and full-duplex
13:17:09.56 Slot-1: Port 1:29 link UP at speed 1 Gbps and full-duplex
13:17:09.55 Slot-1: Port 1:18 link UP at speed 1 Gbps and full-duplex
13:17:09.54 Slot-1: Port 1:17 link UP at speed 1 Gbps and full-duplex
13:17:09.54 Slot-1: Port 1:11 link UP at speed 100 Mbps and full-duplex
13:17:09.54 Slot-1: Port 1:10 link UP at speed 1 Gbps and full-duplex
13:17:09.54 Slot-1: Port 1:9 link UP at speed 100 Mbps and full-duplex
13:17:09.35 Slot-1: Module in Slot-2 is operational
13:17:06.64 Slot-1: Port Mgmt-1 link UP at speed 1 Gbps and full-duplex
13:17:06.37 Slot-1: Module in Slot-1 is operational
13:17:03.87 Slot-1: Slot-2 being Powered ON
13:17:03.47 Slot-1: Error while loading "ports": Speed change is not allowed on port 1:30 as it is a trunk member port.
13:17:03.18 Slot-1: snmpMaster initialization complete
13:17:02.58 Slot-1: thttpd is not MASTER to checkpoint data
13:17:01.51 Slot-1: System is stable. Change to warm reset mode
13:17:01.47 Slot-1: Closing all active telnet sessions
13:16:59.75 Slot-1: Watchdog enabled
13:16:28.60 Slot-1: Internal PSU-2 in slot 1 is disconnected.
13:16:28.60 Slot-1: Internal PSU-1 in slot 1 is disconnected.
13:16:22.84 Slot-1: Setting time to Sat Jul 13 13:16:22 2019
13:16:22.83 Slot-1: Dropped CM_MSG_CHKP_STANDBY_BANNER_FROM_CFG: Length 16 Peer 80 (primary)
13:16:22.83 Slot-1: Dropped CM_MSG_CHKP_STANDBY_BANNER_ACK: Length 14 Peer 80 (primary)
13:16:22.83 Slot-1: Dropped CM_MSG_CHKP_STANDBY_BANNER: Length 14 Peer 80 (primary)
13:16:22.83 Slot-1: Dropped CM_MSG_CONFIG_COREDUMP_STANDBY: Length 16 Peer 80 (primary)
13:16:22.83 Slot-1: Dropped CM_MSG_CONFIG_STANDBY: Length 396 Peer 80 (primary)
13:16:22.83 Slot-1: Node State[3] = BACKUP
13:16:01.57 Slot-1: NVRAM is full, old messages are overwritten.
13:15:57.59 Slot-1: Setting time to Sat Jul 13 17:15:57 2019
13:15:52.58 Slot-1: Node State[2] = STANDBY
13:15:52.58 Slot-1: Node INIT DONE ....
13:15:25.67 Slot-1: **** telnetd started *****
13:15:25.62 Slot-1: Module in Slot-2 is inserted
13:15:25.29 Slot-1: Transitioning slot 2 to UNKNOWN rather than MASTER
13:15:25.20 Slot-1: DOS protect application started successfully
13:15:25.19 Slot-1: Stacking port 2:2 link up at 10Gbps.
13:15:25.19 Slot-1: Stacking port 2:1 link up at 10Gbps.
13:15:25.19 Slot-1: Stacking port 1:2 link up at 10Gbps.
13:15:25.19 Slot-1: Stacking port 1:1 link up at 10Gbps.
13:15:24.80 Slot-1: **** tftpd started *****
13:15:23.13 Slot-1: snmpMaster process has been restarted.
13:15:23.13 Slot-1: snmpSubagent initialization complete
13:15:23.05 Slot-1: Network Login framework has been initialized
13:15:22.08 Slot-1: Node State[1] = INIT
13:15:22.07 Slot-1: Slot-1 being Powered ON
13:15:21.60 Slot-1: Hal initialization done.
13:15:21.57 Slot-1: telnetd listening on port 23
13:15:21.18 Slot-1: Module in Slot-1 is inserted
13:15:21.16 Slot-1: Internal PSU-2 in slot 1 is powered off.
13:15:21.16 Slot-1: Internal PSU-2 in slot 1 is present.
13:15:21.16 Slot-1: Internal PSU-1 in slot 1 is powered off.
13:15:21.16 Slot-1: Internal PSU-1 in slot 1 is present.
13:15:20.90 Slot-1: Starting hal initialization ....
13:15:20.43 Slot-1: The Node Manager (NM) has started processing.
13:15:20.42 Slot-1: DM started
13:15:20.41 Slot-1: The Event Management System logging server has started.
13:15:20.07 Slot-1: EPM Started
13:15:19.79 Slot-1: Changing to watchdog warm reset mode
13:14:30.28 Slot-2: NM: Old Primary's state is FAIL
13:14:30.28 Slot-2: PRIMARY NODE (Slot-1) DOWN
13:12:20.52 Slot-1: Shutting down all processes
13:12:20.51 Slot-1: Slot-1 FAILED (1) Process Failure
13:12:20.39 Slot-1: Node State[4] = FAIL (Process Failure)
13:12:20.38 Slot-1: Process exsshd Failed
13:12:20.36 Slot-1: Configuration database locked
13:12:20.36 Slot-1: Connection lost with process exsshd
13:11:56.57 Slot-1: 76bad0c8 afb10020 sw s1,32(sp)
13:11:56.57 Slot-1: 76bad0c4 27bdffd8 addiu sp,sp,-40
13:11:56.57 Slot-1: 76bad0c0 00000000 nop
13:11:56.57 Slot-1: 76bad0bc 10c00018 beq a2,zero,0x76bad120
13:11:56.57 Slot-1: 76bad0b8 8c860008 lw a2,8(a0)
13:11:56.57 Slot-1: 76bad0b4 <00c20036>tne a2,v0
13:11:56.57 Slot-1: 76bad0b0 3442ffee ori v0,v0,0xffee
13:11:56.57 Slot-1: 76bad0ac 3c0200c0 lui v0,0xc0
13:11:56.57 Slot-1: 76bad0a8 8c860014 lw a2,20(a0)
13:11:56.14 Slot-1: Code:
13:11:56.14 Slot-1:
13:11:56.14 Slot-1: Process exsshd pid 2422 died with signal 5
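(Note the log above is printed newest-first, so the exsshd crash at 13:11:56 is at the bottom and the reboot/failover sequence reads upward. If it helps to replay events in order, here is a minimal sketch that re-sorts timestamped lines oldest-first; it assumes every line of interest starts with an HH:MM:SS.ss timestamp, as in the excerpt.)

```python
import re

# Leading HH:MM:SS.ss timestamp on each EXOS log line.
TS = re.compile(r"^(\d{2}):(\d{2}):(\d{2})\.(\d{2})")

def to_seconds(line):
    """Return the leading timestamp as seconds since midnight, or None."""
    m = TS.match(line)
    if not m:
        return None
    h, mnt, s, cs = map(int, m.groups())
    return h * 3600 + mnt * 60 + s + cs / 100

def chronological(lines):
    """Sort timestamped log lines oldest-first (the log prints newest-first)."""
    return sorted((l for l in lines if to_seconds(l) is not None), key=to_seconds)

# A few lines from the excerpt above, in the order the switch displays them:
log = [
    "13:12:20.38 Slot-1: Process exsshd Failed",
    "13:11:56.14 Slot-1: Process exsshd pid 2422 died with signal 5",
    "13:14:30.28 Slot-2: PRIMARY NODE (Slot-1) DOWN",
]
for line in chronological(log):
    print(line)  # crash first, then process failure, then node DOWN
```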

T_Pitch
New Contributor III
It is. I didn't post the entire log, but there are 200 or more events like this within a matter of minutes. In my original post I included just the failure and omitted the noise.

code:
13:11:51.80  Slot-1: Login failed for user dhcp through ssh (192.168.1.201)
13:11:51.74 Slot-1: Login failed for user admin through ssh (192.168.1.201)
13:11:51.66 Slot-1: Login failed for user admin through ssh (192.168.1.201)
13:11:46.61 Slot-1: Login failed for user Root through ssh (192.168.1.201)
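A quick way to confirm the brute-force pattern is to tally failed logins per source IP. A minimal sketch (the regex assumes the exact "Login failed" message format shown above):

```python
import re
from collections import Counter

# Matches EXOS failed-login events, e.g.:
#   13:11:51.74 Slot-1: Login failed for user admin through ssh (192.168.1.201)
FAILED = re.compile(
    r"Login failed for user (\S+) through (\w+) \((\d+\.\d+\.\d+\.\d+)\)"
)

def tally_failures(lines):
    """Count failed logins per (source IP, protocol) to spot brute-force bursts."""
    counts = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            user, proto, ip = m.groups()
            counts[(ip, proto)] += 1
    return counts

log = [
    "13:11:51.80 Slot-1: Login failed for user dhcp through ssh (192.168.1.201)",
    "13:11:51.74 Slot-1: Login failed for user admin through ssh (192.168.1.201)",
    "13:11:51.66 Slot-1: Login failed for user admin through ssh (192.168.1.201)",
    "13:11:46.61 Slot-1: Login failed for user Root through ssh (192.168.1.201)",
]
print(tally_failures(log))  # Counter({('192.168.1.201', 'ssh'): 4})
```

Hundreds of hits from a single IP within minutes, as described above, is a clear brute-force signature.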

Croobum
New Contributor
That looks like a brute-force attack.