Close node socket after waiting one hour for it to go inactive to run exclusive event
Apparently some sysops like to leave their terminals idling (e.g. running MRC) and never disconnect, and since they're T-exempt, the BBS won't limit their time online to allow events to run. Exclusive events wait for all nodes to become inactive, but give up after 90 minutes of waiting, run the event anyway, and set the node status to WFC at the end. If the node was actually still connected/in-use, this could lead to the (new) critical error message being logged ("!Node X status is WFC, but the node socket (N) and thread are still in use!") and other chaos (NODE STATUS FIXUP and the like).

This change should prevent all that by abruptly disconnecting the node after waiting 60 minutes for the sysop to gracefully disconnect: closing the node's socket should be enough to get the node_thread to terminate and set the node status back to NODE_WFC. After 90 minutes of waiting, we'll still do the same abort-wait (and run the event anyway). The log message when the socket is closed: "!TIRED of waiting for node N to become inactive (status=X), closing socket Y"
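To make the timing concrete, here is a minimal, self-contained C sketch of the wait loop described above. It is not the actual sbbs code: the node table (node_stat[]/node_sock[]), MAX_NODES, the helper functions, and the timeout macros are all hypothetical stand-ins; only the 60/90-minute thresholds and the log-message text come from this commit.

    /*
     * Sketch only: node_stat[], node_sock[], MAX_NODES and the helpers below
     * are hypothetical stand-ins, not the actual Synchronet data structures.
     */
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>     /* sleep(); use Sleep(1000) from windows.h on Windows */

    #define NODE_WFC            0
    #define NODE_INUSE          1
    #define MAX_NODES           4
    #define SOCKET_CLOSE_SECS   (60 * 60)   /* close lingering node sockets after 1 hour */
    #define GIVE_UP_SECS        (90 * 60)   /* run the event anyway after 90 minutes */

    /* index 1..MAX_NODES; node 1 starts "in use" to exercise the timeout path */
    static int node_stat[MAX_NODES + 1] = { 0, NODE_INUSE, NODE_WFC, NODE_WFC, NODE_WFC };
    static int node_sock[MAX_NODES + 1] = { 0, 42, -1, -1, -1 };

    static void close_node_socket(int n)
    {
        /* Closing the socket lets the node_thread terminate and reset status to WFC */
        node_stat[n] = NODE_WFC;
        node_sock[n] = -1;
    }

    static bool all_nodes_inactive(void)
    {
        for (int n = 1; n <= MAX_NODES; n++)
            if (node_stat[n] != NODE_WFC)
                return false;
        return true;
    }

    static void wait_to_run_exclusive_event(void)
    {
        time_t start = time(NULL);
        bool   closed = false;

        while (!all_nodes_inactive()) {
            time_t elapsed = time(NULL) - start;

            /* New behavior: after an hour, abruptly disconnect any node still in use */
            if (!closed && elapsed >= SOCKET_CLOSE_SECS) {
                for (int n = 1; n <= MAX_NODES; n++) {
                    if (node_stat[n] != NODE_WFC) {
                        printf("!TIRED of waiting for node %d to become inactive "
                               "(status=%d), closing socket %d\n",
                               n, node_stat[n], node_sock[n]);
                        close_node_socket(n);
                    }
                }
                closed = true;
            }

            /* Existing behavior: give up after 90 minutes and run the event anyway */
            if (elapsed >= GIVE_UP_SECS)
                break;

            sleep(1);
        }
        /* ...run the exclusive event here, then set the node status back to WFC... */
    }

    int main(void)
    {
        wait_to_run_exclusive_event();
        return 0;
    }

The point of closing the socket (rather than forcibly killing anything) is that the node_thread can unwind normally and reset the node status itself, which is what avoids the "status is WFC, but the node socket and thread are still in use" condition described above.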
parent fa5877aa