VMWARE-NSX-MANAGER-MIB: View SNMP OID List / Download MIB

VENDOR: VMWARE INC.



Download the standard MIB format if you are planning to load the MIB file into some system (an OS, Zabbix, PRTG, ...) or view it with a MIB browser. CSV is more suitable for analyzing and viewing OIDs and other MIB objects in Excel. JSON and YAML formats are usually used in programming, though some systems (like Logstash) can consume a MIB in YAML format.
Keep in mind that standard MIB files can be successfully loaded by systems and programs only if all the required MIBs from the "Imports" section are already loaded.
The tree-like SNMP object navigator needs no explanation because it is very simple to use. And if you stumbled on this MIB from Google, note that you can always return to the home page to perform another MIB or OID lookup.
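For programmatic lookups, the downloaded MIB can be compiled and used to translate numeric OIDs into symbolic names. Below is a minimal sketch using pysnmp with its pysmi-backed MIB compiler; the /opt/mibs path is an assumption, so point it at wherever the MIB file and its imports were saved.

    # Minimal sketch, assuming pysnmp and pysmi are installed and the
    # downloaded MIB (plus the MIBs it imports) sits in /opt/mibs.
    from pysnmp.smi import builder, view, compiler

    mibBuilder = builder.MibBuilder()
    compiler.addMibCompiler(mibBuilder, sources=['file:///opt/mibs'])
    mibBuilder.loadModules('VMWARE-NSX-MANAGER-MIB')

    mibView = view.MibViewController(mibBuilder)

    # Translate the numeric OID of vmwNsxMEventCode into its symbolic label.
    oid, label, suffix = mibView.getNodeName((1, 3, 6, 1, 4, 1, 6876, 90, 1, 1, 1))
    print('.'.join(label))  # ends with ...vmwNsxMAlertData.vmwNsxMEventCode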


Object Name OID Type Access Info
 vmwNsxManagerMIB 1.3.6.1.4.1.6876.90.1
This MIB file contains the information that the receiving party needs in order to interpret SNMP traps sent by NSX Manager.

VMware NSX for vSphere is a key product in the SDDC architecture. With NSX, virtualization delivers for networking what it has already delivered for compute and storage. In much the same way that server virtualization programmatically creates, snapshots, deletes, and restores software-based virtual machines (VMs), NSX network virtualization programmatically creates, snapshots, deletes, and restores software-based virtual networks. The result is a completely transformative approach to networking that not only enables data center managers to achieve orders of magnitude better agility and economics, but also allows for a vastly simplified operational model for the underlying physical network. With the ability to be deployed on any IP network, including both existing traditional networking models and next-generation fabric architectures from any vendor, NSX is a completely non-disruptive solution. In fact, with NSX, the physical network infrastructure you already have is all you need to deploy a software-defined data center.

The NSX Manager provides the graphical user interface (GUI) and the REST APIs for creating, configuring, and monitoring NSX components, such as controllers, logical switches, and edge services gateways. The NSX Manager provides an aggregated system view and is the centralized network management component of NSX. NSX Manager is installed as a virtual appliance on any ESX host in your vCenter environment.

Support requests can be filed with VMware using KB article http://kb.vmware.com/kb/2006985. To reach the NSX Manager Service Composer UI, log in to the vSphere UI (https://) -> Networking & Security -> Service Composer.
     vmwNsxMAlertData 1.3.6.1.4.1.6876.90.1.1
The members of this group are the OIDs for VarBinds that contain data for ALL alerts.
         vmwNsxMEventCode 1.3.6.1.4.1.6876.90.1.1.1 integer32 no-access
The event code of the alert that was generated. To fetch a list of all the events with their code, severity, and description, invoke the NSX Manager URL https:///api/2.0/systemevent/eventcode. The event code specifically identifies each individual event type and is uniquely assigned only once to a particular event type.
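As a sketch of how that event catalog might be pulled programmatically (the hostname and credentials below are placeholders, and NSX Manager's REST API is assumed to use HTTP basic authentication):

    # Hedged sketch: nsx-mgr.example.com and the credentials are placeholders.
    import requests

    resp = requests.get(
        'https://nsx-mgr.example.com/api/2.0/systemevent/eventcode',
        auth=('admin', 'secret'),  # basic auth against NSX Manager
        verify=False,              # or point verify= at the manager's CA bundle
    )
    resp.raise_for_status()
    print(resp.text)  # the event catalog: codes, severities, descriptions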
         vmwNsxMEventTimestamp 1.3.6.1.4.1.6876.90.1.1.2 dateandtime no-access
The timestamp when the event was raised in the NSX Manager.
         vmwNsxMEventMessage 1.3.6.1.4.1.6876.90.1.1.3 octet string no-access
This object provides a human-readable description of the event or group of events.
         vmwNsxMEventSeverity 1.3.6.1.4.1.6876.90.1.1.4 vmwnsxmanagertypeseverity no-access
The severity of the event that was generated. The severity is pre-defined and can only be changed from the NSX Manager section of the vSphere Web Client if the administrator so wishes.
         vmwNsxMEventComponent 1.3.6.1.4.1.6876.90.1.1.5 octet string no-access
The NSX Manager component where this event was generated.
         vmwNsxMUuid 1.3.6.1.4.1.6876.90.1.1.6 uuid no-access
The UUID of the NSX Manager where this event was generated.
         vmwNsxMCount 1.3.6.1.4.1.6876.90.1.1.7 integer32 no-access
The count of events for a particular group raised in the last 5-minute interval.
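Together, these seven varbinds are what a receiver should expect on every NSX Manager trap. The following is a minimal listener sketch in the pysnmp 4.x style; SNMPv2c with community 'public' on UDP port 162 is an assumption, so match these to your trap destination settings (binding port 162 usually requires elevated privileges).

    # Minimal pysnmp trap listener sketch; community and port are assumptions.
    from pysnmp.entity import engine, config
    from pysnmp.carrier.asyncore.dgram import udp
    from pysnmp.entity.rfc3413 import ntfrcv

    # Friendly names for the vmwNsxMAlertData varbinds listed above.
    NSX_VARBINDS = {
        '1.3.6.1.4.1.6876.90.1.1.1': 'vmwNsxMEventCode',
        '1.3.6.1.4.1.6876.90.1.1.2': 'vmwNsxMEventTimestamp',
        '1.3.6.1.4.1.6876.90.1.1.3': 'vmwNsxMEventMessage',
        '1.3.6.1.4.1.6876.90.1.1.4': 'vmwNsxMEventSeverity',
        '1.3.6.1.4.1.6876.90.1.1.5': 'vmwNsxMEventComponent',
        '1.3.6.1.4.1.6876.90.1.1.6': 'vmwNsxMUuid',
        '1.3.6.1.4.1.6876.90.1.1.7': 'vmwNsxMCount',
    }

    snmpEngine = engine.SnmpEngine()
    config.addTransport(snmpEngine, udp.domainName,
                        udp.UdpTransport().openServerMode(('0.0.0.0', 162)))
    config.addV1System(snmpEngine, 'nsx-area', 'public')  # v1/v2c community

    def on_notification(snmpEngine, stateReference, contextEngineId,
                        contextName, varBinds, cbCtx):
        # Print each varbind, using the NSX name when the OID is known.
        for name, value in varBinds:
            oid = str(name)
            print(NSX_VARBINDS.get(oid, oid), '=', value.prettyPrint())

    ntfrcv.NotificationReceiver(snmpEngine, on_notification)
    snmpEngine.transportDispatcher.jobStarted(1)
    snmpEngine.transportDispatcher.runDispatcher()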
     vmwNsxMNotification 1.3.6.1.4.1.6876.90.1.2
All notifications for NSX Manager use this OID prefix.
         vmwNsxMBranch 1.3.6.1.4.1.6876.90.1.2.0
A branch segregated out for various groups and other future requirements.
             vmwNsxMGroupsBranch 1.3.6.1.4.1.6876.90.1.2.0.1
Grouped Notifications will have this OID prefix.
                 vmwNsxMGroupsPrefix 1.3.6.1.4.1.6876.90.1.2.0.1.0
Prefix added to place a zero in the penultimate sub-identifier of group OIDs.
                     vmwNsxMConfigGroup 1.3.6.1.4.1.6876.90.1.2.0.1.0.1
Configuration notifications that are grouped will have this OID prefix.
         vmwNsxMSnmp 1.3.6.1.4.1.6876.90.1.2.1
Notifications that are SNMP-related will have this OID prefix.
             vmwNsxMSnmpPrefix 1.3.6.1.4.1.6876.90.1.2.1.0
This group is the prefix one uses when creating VMware NSX Manager-specific trap OIDs for the SNMP module.
                 vmwNsxMSnmpDisabled 1.3.6.1.4.1.6876.90.1.2.1.0.1
This notification is sent when the sending of SNMP traps is disabled. It is most likely the last SNMP trap the SNMP manager receives. It may occasionally be missed under a high volume of traps; in those cases you can rely on the heartbeat traps no longer being sent, as sketched below. Action required: None. If the sending of SNMP traps is re-enabled, a warmStart trap is received. Frequency of traps: Once, whenever the sending of SNMP traps is disabled.
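A receiver can exploit that heartbeat behavior: if no trap arrives within a window, treat trap delivery as disabled or broken. A small sketch follows; the 300-second window is an assumption, so use your configured heartbeat interval instead.

    # Watchdog sketch: flag trap delivery as stale when no trap (heartbeat or
    # otherwise) has arrived within the window. 300 s is an assumption.
    import threading, time

    class TrapWatchdog:
        def __init__(self, window_s: float = 300.0):
            self.window_s = window_s
            self.last_seen = time.monotonic()
            self._lock = threading.Lock()

        def record_trap(self) -> None:
            # Call this from the trap receiver whenever any NSX trap arrives.
            with self._lock:
                self.last_seen = time.monotonic()

        def is_stale(self) -> bool:
            with self._lock:
                return time.monotonic() - self.last_seen > self.window_s

    wd = TrapWatchdog()
    wd.record_trap()
    print(wd.is_stale())  # False immediately after a trap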
                 vmwNsxMSnmpManagerConfigUpdated 1.3.6.1.4.1.6876.90.1.2.1.0.2
This notification is sent when the SNMP manager configuration has been updated. The event message will carry the semicolon-separated details of the new SNMP managers. Action required: None. Frequency of traps: Once, whenever the SNMP manager configuration is updated.
         vmwNsxMSecurity 1.3.6.1.4.1.6876.90.1.2.2
Notifications that are security-related will have this OID prefix.
             vmwNsxMSecurityPrefix 1.3.6.1.4.1.6876.90.1.2.2.0
This group is the prefix one uses when creating VMware NSX Manager-specific trap OIDs for the security module.
                 vmwNsxMIpAddedBlackList 1.3.6.1.4.1.6876.90.1.2.2.0.1
When user authentication fails a number of times in succession, that user is blacklisted and further login attempts are disabled for that user from the given IP address for some time. Action required: None. Frequency of traps: Whenever user authentication fails consecutively within a short period.
                 vmwNsxMIpRemovedBlackList 1.3.6.1.4.1.6876.90.1.2.2.0.2
After a user is blacklisted and the blacklist duration expires, the user is removed from the blacklist. Action required: None. Frequency of traps: Whenever the blacklist duration expires for any user.
                 vmwNsxMSsoConfigFailure 1.3.6.1.4.1.6876.90.1.2.2.0.3
Whenever configuration of the lookup service / SSO fails for reasons such as invalid credentials, invalid configuration, or a time-sync problem. Action required: Check the event message and reconfigure the lookup service with correct details. Frequency of traps: Once per failed configuration of the lookup service.
                 vmwNsxMSsoUnconfigured 1.3.6.1.4.1.6876.90.1.2.2.0.4
Whenever a user unconfigures the lookup service. Action required: None. Frequency of traps: Once per unconfiguration of the lookup service.
                 vmwNsxMUserRoleAssigned 1.3.6.1.4.1.6876.90.1.2.2.0.5
When a role is assigned on NSX Manager for a vCenter user. Action required: None. Frequency of traps: Once for each user who is assigned a role.
                 vmwNsxMUserRoleUnassigned 1.3.6.1.4.1.6876.90.1.2.2.0.6
When a role is unassigned on NSX Manager for a vCenter user. Action required: None. Frequency of traps: Once for each user whose role is removed.
                 vmwNsxMGroupRoleAssigned 1.3.6.1.4.1.6876.90.1.2.2.0.7
When a role is assigned on NSX Manager for a vCenter group. Action required: None. Frequency of traps: Once for each group that is assigned a role.
                 vmwNsxMGroupRoleUnassigned 1.3.6.1.4.1.6876.90.1.2.2.0.8
When a role is unassigned on NSX Manager for a vCenter group. Action required: None. Frequency of traps: Once for each group whose role is removed.
                 vmwNsxMVcLoginFailed 1.3.6.1.4.1.6876.90.1.2.2.0.9
Whenever the connection with vCenter starts failing due to invalid credentials. Action required: Reconfigure the NSX Manager vCenter configuration with correct credentials.
                 vmwNsxMVcDisconnected 1.3.6.1.4.1.6876.90.1.2.2.0.10
Whenever the default vCenter connection maintained by NSX is disconnected. Action required: The administrator needs to check connectivity with vCenter for network problems or other causes.
                 vmwNsxMLostVcConnectivity 1.3.6.1.4.1.6876.90.1.2.2.0.11
Whenever the default vCenter connection maintained by NSX is disconnected. Action required: The administrator needs to check connectivity with vCenter for network problems or other causes.
                 vmwNsxMSsoDisconnected 1.3.6.1.4.1.6876.90.1.2.2.0.12
Whenever the SSO lookup service is disconnected. Action required: Please check the configuration for possible disconnection causes such as invalid credentials, time-sync issues, or network connectivity problems. Navigate to the Appliance Management web UI in a browser (https:///), go to the Manage vCenter Registration tab, and verify the configuration for the SSO lookup service. Frequency of traps: Once per disconnect event; the default frequency for checking the SSO connection state is 1 hour.
         vmwNsxMFirewall 1.3.6.1.4.1.6876.90.1.2.3
Notifications that are firewall-related will have this OID prefix.
             vmwNsxMFirewallPrefix 1.3.6.1.4.1.6876.90.1.2.3.0
This group is the prefix one uses when creating VMware NSX Manager-specific trap OIDs for the firewall module.
                 vmwNsxMFltrCnfgUpdateFailed 1.3.6.1.4.1.6876.90.1.2.3.0.1
NSX Manager failed to enforce the DFW. VMs on this host may not be protected by the DFW. Contextual data provided with this event may indicate the cause of this failure. This can happen if the VIB versions on the NSX Manager and the ESX host mismatch, for example during an upgrade. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMFltrCnfgNotAppliedToVnic 1.3.6.1.4.1.6876.90.1.2.3.0.2
NSX Manager failed to enforce the DFW configuration on a vNIC. This particular VM may not be protected by the DFW. Contextual data provided with this event may indicate the cause of this failure. This can happen if the VIB versions on the NSX Manager and the ESX host mismatch, for example during an upgrade. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMFltrCnfgAppliedToVnic 1.3.6.1.4.1.6876.90.1.2.3.0.3
Successfully updated filter config. Action required: None
                 vmwNsxMFltrCreatedForVnic 1.3.6.1.4.1.6876.90.1.2.3.0.4
Filter created. DFW is enforced in the datapath for the vnic. Action required: None
                 vmwNsxMFltrDeletedForVnic 1.3.6.1.4.1.6876.90.1.2.3.0.5
Filter deleted. DFW is removed from the vnic. Action required: None
                 vmwNsxMFirewallConfigUpdateFailed 1.3.6.1.4.1.6876.90.1.2.3.0.6
The firewall rule configuration between the NSX Manager and the host is not in sync. Contextual data provided with this event may indicate the cause of this failure. Verify that the host in question was properly prepared by NSX Manager. Collect error logs (vsfwd.log) from when the host received the firewall config. Force-sync the firewall config using the ForceSync API/UI. See kb.vmware.com/kb/2125437. If the issue persists, please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMFirewallRuleFailedVnic 1.3.6.1.4.1.6876.90.1.2.3.0.7
Failed to apply the Distributed Firewall configuration. Contextual data provided with this event may indicate the cause of this failure. Collect error logs (vmkernel.log) from when the firewall configuration was applied to the vNIC. The vsip kernel heaps may not have enough free memory. Check the vsfwd logs. See kb.vmware.com/kb/2125437. If the issue persists, please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMFirewallRuleAppliedVnic 1.3.6.1.4.1.6876.90.1.2.3.0.8
Applied the firewall config. The key value will carry context info such as the generation number and other debugging info. Action required: None
                 vmwNsxMCntnrCnfgUpdateFailed 1.3.6.1.4.1.6876.90.1.2.3.0.9
Failed to receive, parse, or update the container configuration. Contextual data provided with this event may indicate the cause of this failure. Collect error logs (vmkernel.log) from when the firewall configuration was applied to the vNIC. Verify that the vsip kernel heaps have enough free memory. Check the vsfwd logs. See kb.vmware.com/kb/2125437. If the issue persists, please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMFlowMissed 1.3.6.1.4.1.6876.90.1.2.3.0.10
Flow missed. Contextual data provided with this event may indicate the cause of this failure. Collect error logs (vmkernel.log) from when the firewall configuration was applied to the vNIC. Verify that the vsip kernel heaps have enough free memory and that vsfwd memory consumption is within resource limits. Check the vsfwd logs. See kb.vmware.com/kb/2125437. If the issue persists, please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMSpoofGuardCnfgUpdateFailed 1.3.6.1.4.1.6876.90.1.2.3.0.11
Failed to receive, parse, or update the SpoofGuard configuration. Contextual data provided with this event may indicate the cause of this failure. Verify that the host in question was properly prepared by NSX Manager. Collect error logs (vmkernel.log) from when the SpoofGuard configuration was applied to the host. Force-sync the firewall configuration. See kb.vmware.com/kb/2125437.
                 vmwNsxMSpoofGuardFailed 1.3.6.1.4.1.6876.90.1.2.3.0.12
Failed to apply SpoofGuard to the vNIC. Contextual data provided with this event may indicate the cause of this failure. Verify that the vsip kernel heaps have enough free memory. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMSpoofGuardApplied 1.3.6.1.4.1.6876.90.1.2.3.0.13
Enabled SpoofGuard for the vNIC. Action required: None
                 vmwNsxMSpoofGuardDisableFail 1.3.6.1.4.1.6876.90.1.2.3.0.14
Failed to disable SpoofGuard on the vNIC. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMSpoofGuardDisabled 1.3.6.1.4.1.6876.90.1.2.3.0.15
Disabled SpoofGuard for the vNIC. Action required: None
                 vmwNsxMLegacyAppServiceDeletionFailed 1.3.6.1.4.1.6876.90.1.2.3.0.16
A notification generated when legacy application service VM deletion failed.
                 vmwNsxMFirewallCpuThresholdCrossed 1.3.6.1.4.1.6876.90.1.2.3.0.17
The vsfwd CPU usage threshold was exceeded. Reduce the amount of traffic from VMs on the host in question.
                 vmwNsxMFirewallMemThresholdCrossed 1.3.6.1.4.1.6876.90.1.2.3.0.18
The vsfwd memory threshold was exceeded. Reduce the number of VMs on the host in question, or reduce the number of rules or containers in the firewall config. Use the appliedTo feature to limit the number of rules for the current cluster.
                 vmwNsxMConnPerSecThrshldCrossed 1.3.6.1.4.1.6876.90.1.2.3.0.19
The vsfwd Connections Per Second (CPS) threshold was exceeded. Reduce the rate of new connections from VMs on the host in question.
                 vmwNsxMFirewallCnfgUpdateTimedOut 1.3.6.1.4.1.6876.90.1.2.3.0.20
NSX Manager waits for 2 minutes after publishing the firewall configuration to each host in the cluster. If a host takes more than 2 minutes to process the data, it times out. Please check the host in question and see whether vsfwd is functioning. Also use CLI commands to verify whether rule realization is working properly. See kb.vmware.com/kb/2125437. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMSpoofGuardCnfgUpdateTmOut 1.3.6.1.4.1.6876.90.1.2.3.0.21
NSX Manager waits for 2 minutes after publishing the SpoofGuard configuration to each host in the cluster. If a host takes more than 2 minutes to process the data, it times out. Please check the host in question and see whether vsfwd is functioning. Also use CLI commands to verify whether rule realization is working properly. See kb.vmware.com/kb/2125437. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMFirewallPublishFailed 1.3.6.1.4.1.6876.90.1.2.3.0.22
Firewall configuration publishing has failed for a given cluster/host. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMCntnrUpdatePublishFailed 1.3.6.1.4.1.6876.90.1.2.3.0.23
Publishing of a container (IP/MAC/vNIC) update failed for a given host/cluster object. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMSpoofGuardUpdatePublishFailed 1.3.6.1.4.1.6876.90.1.2.3.0.24
The publishing of the SpoofGuard updates on this host has failed. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMExcludeListPublishFailed 1.3.6.1.4.1.6876.90.1.2.3.0.25
The publishing of the exclude list, or of updates to the exclude list, on this host has failed. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMFirewallCnfgUpdateOnDltCntnr 1.3.6.1.4.1.6876.90.1.2.3.0.26
An object referenced in firewall rules was deleted. Action required: Go to the NSX Manager DFW UI. All invalid references are marked invalid on the UI as well. Please remove the orphaned references and update the firewall rules.
                 vmwNsxMHostSyncFailed 1.3.6.1.4.1.6876.90.1.2.3.0.27
Host-level force synchronization has failed. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMHostSynced 1.3.6.1.4.1.6876.90.1.2.3.0.28
Force Sync operation for host succeeded. Action required: None
                 vmwNsxMFirewallInstalled 1.3.6.1.4.1.6876.90.1.2.3.0.29
The Distributed Firewall was successfully installed on the host.
                 vmwNsxMFirewallInstallFailed 1.3.6.1.4.1.6876.90.1.2.3.0.30
The Distributed Firewall installation has failed. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMFirewallClusterInstalled 1.3.6.1.4.1.6876.90.1.2.3.0.31
The Distributed Firewall has been installed at the request of a user.
                 vmwNsxMFirewallClusterUninstalled 1.3.6.1.4.1.6876.90.1.2.3.0.32
The Distributed Firewall has been uninstalled at the request of a user.
                 vmwNsxMFirewallClusterDisabled 1.3.6.1.4.1.6876.90.1.2.3.0.33
The Distributed Firewall has been disabled on the cluster at the request of a user.
                 vmwNsxMFirewallForceSyncClusterFailed 1.3.6.1.4.1.6876.90.1.2.3.0.34
The Force Sync operation for the cluster has failed. Use CLI commands to look at the logs and verify whether any error messages appeared during the operation. See kb.vmware.com/kb/2125437. Please collect the ESX and NSX Manager tech support bundles and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
                 vmwNsxMFirewallForceSyncClusterSuccess 1.3.6.1.4.1.6876.90.1.2.3.0.35
Force Sync operation for cluster succeeded. Action required: None
                 vmwNsxMFirewallVsfwdProcessStarted 1.3.6.1.4.1.6876.90.1.2.3.0.36
vsfwd process started on host. Action required: None
         vmwNsxMEdge 1.3.6.1.4.1.6876.90.1.2.4
Notifications that are edge-related will have this OID prefix.
             vmwNsxMEdgePrefix 1.3.6.1.4.1.6876.90.1.2.4.0
This group is the prefix one uses when creating VMware NSX Manager-specific trap OIDs for the edge module.
                 vmwNsxMEdgeNoVmServing 1.3.6.1.4.1.6876.90.1.2.4.0.1
None of the Edge VMs were found in a serving state. There is a possibility of network disruption. Action required: The system currently auto-recovers from this state. This event should be followed by traps with event code 30202 or 30203.
                 vmwNsxMEdgeGatewayCreated 1.3.6.1.4.1.6876.90.1.2.4.0.2
Edge Gateway created. Action required: None
                 vmwNsxMEdgeVmBadState 1.3.6.1.4.1.6876.90.1.2.4.0.3
The Edge VM is in a bad state and needs a force sync. Action required: The system auto-triggers a force sync, but if the problem is sustained, a manual force sync should be triggered. For an ESG, force sync is disruptive and will reboot the Edge VMs.
                 vmwNsxMEdgeVmCommFailed 1.3.6.1.4.1.6876.90.1.2.4.0.4
Failed to communicate with the Edge VM. Action required: Needs investigation depending on the communication channel. The log needs to be checked for the VIX error code for further action.
                 vmwNsxMEdgeVmCnfgChanged 1.3.6.1.4.1.6876.90.1.2.4.0.5
A notification generated when NSX Edge VM configuration is changed. Action required: None
                 vmwNsxMEdgeGatewayDeleted 1.3.6.1.4.1.6876.90.1.2.4.0.6
A notification generated when Edge Gateway is deleted. Action required: None
                 vmwNsxMEdgeGatewayReDeployed 1.3.6.1.4.1.6876.90.1.2.4.0.7
A notification generated when Edge Gateway is redeployed. Action required: None
                 vmwNsxMEdgeVmPowerOff 1.3.6.1.4.1.6876.90.1.2.4.0.8
A notification generated when NSX Edge VM is powered off. Action required: None
                 vmwNsxMEdgeApplianceSizeChanged 1.3.6.1.4.1.6876.90.1.2.4.0.9
A notification generated when Edge appliance size has changed. Action required: None
                 vmwNsxMEdgeUpgrade51x 1.3.6.1.4.1.6876.90.1.2.4.0.10
A notification generated when Edge Gateway is upgraded to 5.1.x. Action required: None
                 vmwNsxMEdgeLicenseChanged 1.3.6.1.4.1.6876.90.1.2.4.0.11
A notification generated when Edge licensing changed on vCenter Server. Action required: None
                 vmwNsxMEdgeApplianceMoved 1.3.6.1.4.1.6876.90.1.2.4.0.12
A notification generated when Edge appliance is moved in the vCenter inventory.
                 vmwNsxMEdgeApplianceNotFound 1.3.6.1.4.1.6876.90.1.2.4.0.13
A notification generated when the Edge appliance is not found in the vCenter inventory. Action required: If the VM was accidentally deleted, redeploy the Edge.
                 vmwNsxMEdgeVMHealthCheckMiss 1.3.6.1.4.1.6876.90.1.2.4.0.14
A notification generated when the Edge VM is not responding to health checks. Action required: Communication issues between the manager and the Edge. Log analysis is required to root-cause the issue.
                 vmwNsxMEdgeHealthCheckMiss 1.3.6.1.4.1.6876.90.1.2.4.0.15
A notification generated when none of the Edge VMs are found in a serving state. There is a possibility of network disruption. Action required: Communication issues between the manager and the Edge. Log analysis is required to root-cause the issue.
                 vmwNsxMEdgeCommAgentNotConnected 1.3.6.1.4.1.6876.90.1.2.4.0.16
A notification generated when the Edge Communication Agent is not connected to vCenter Server. Action required: Check VSM and VC connectivity. Try registering VSM with VC.
                 vmwNsxMApplianceWithDifferentId 1.3.6.1.4.1.6876.90.1.2.4.0.17
A notification generated when Edge VM is discovered with a different vmId. Action required: None
                 vmwNsxMFirewallRuleModified 1.3.6.1.4.1.6876.90.1.2.4.0.18
A notification generated when Edge firewall rule is modified. Action required: Revisit firewall rule and perform required updates
                 vmwNsxMEdgeAntiAffinityRuleViolated 1.3.6.1.4.1.6876.90.1.2.4.0.19
A notification generated when powering on an NSX Edge appliance violates a virtual machine anti-affinity rule. Action required: Anti-affinity rules were removed from the cluster; both HA VMs may run on the same host. Go to VC and revisit the anti-affinity rules on the cluster.
                 vmwNsxMEdgeHaEnabled 1.3.6.1.4.1.6876.90.1.2.4.0.20
A notification generated when NSX Edge High Availability is enabled. Action required: None
                 vmwNsxMEdgeHaDisabled 1.3.6.1.4.1.6876.90.1.2.4.0.21
A notification generated when NSX Edge High Availability is disabled. Action required: None
                 vmwNsxMEdgeGatewayRecovered 1.3.6.1.4.1.6876.90.1.2.4.0.22
A notification generated when the NSX Edge Gateway has recovered and is now responding to health checks. Action required: None
                 vmwNsxMEdgeVmRecovered 1.3.6.1.4.1.6876.90.1.2.4.0.23
A notification generated when the NSX Edge VM has recovered and is now responding to health checks. Action required: None
                 vmwNsxMEdgeGatewayUpgraded 1.3.6.1.4.1.6876.90.1.2.4.0.24
A notification generated when Edge Gateway is upgraded. Action required: None
                 vmwNsxMEdgeVmHlthChkDisabled 1.3.6.1.4.1.6876.90.1.2.4.0.25
A notification generated when Edge VM health checks are disabled after consecutive critical VIX errors. Please redeploy or force-sync the VM to resume health checks. Action required: This points to environmental issues that lead to repeated failures over VIX. Log analysis needs to be done to identify the root cause. After resolving the issues, force-sync the Edge VM to resume health checks. Force sync and redeploy are disruptive operations.
                 vmwNsxMEdgePrePublishFailed 1.3.6.1.4.1.6876.90.1.2.4.0.26
A notification generated when pre-publish has failed on the Edge VM. Action required: Firewall rules might be out of sync. The system auto-recovers, but if the problem persists, trigger a force sync.
                 vmwNsxMEdgeForcedSync 1.3.6.1.4.1.6876.90.1.2.4.0.27
A notification generated when Edge VM was force synced. Action required: None
                 vmwNsxMEdgeVmBooted 1.3.6.1.4.1.6876.90.1.2.4.0.28
A notification generated when Edge VM was booted. Action required: None
                 vmwNsxMEdgeVmInBadState 1.3.6.1.4.1.6876.90.1.2.4.0.29
A notification generated when the Edge VM is in a bad state and needs a force sync. Action required: Force sync required.
                 vmwNsxMEdgeVmCpuUsageIncreased 1.3.6.1.4.1.6876.90.1.2.4.0.30
A notification generated when Edge VM CPU usage has increased. Action required: Spikes are normal, but collect tech support logs for further analysis if high CPU usage is sustained for a long duration.
                 vmwNsxMEdgeVmMemUsageIncreased 1.3.6.1.4.1.6876.90.1.2.4.0.31
A notification generated when Edge VM Memory usage has increased. Action required: System recovers but collect tech support logs for further analysis.
                 vmwNsxMEdgeVmProcessFailure 1.3.6.1.4.1.6876.90.1.2.4.0.32
A notification generated when Edge VM process monitor detects a process failure. Action required: System recovers but collect tech support logs for further analysis.
                 vmwNsxMEdgeVmSysTimeBad 1.3.6.1.4.1.6876.90.1.2.4.0.33
A notification generated when Edge VM system time is bad. Action required: System recovers. Check NTP setting on hosts.
                 vmwNsxMEdgeVmSysTimeSync 1.3.6.1.4.1.6876.90.1.2.4.0.34
A notification generated when Edge VM system time sync up happens. Action required: None
                 vmwNsxMEdgeAesniCryptoEngineUp 1.3.6.1.4.1.6876.90.1.2.4.0.35
A notification generated when AESNI crypto engine is up. Action required: None
                 vmwNsxMEdgeAesniCryptoEngineDown 1.3.6.1.4.1.6876.90.1.2.4.0.36
A notification generated when AESNI crypto engine is down. Action required: None
                 vmwNsxMEdgeVmOom 1.3.6.1.4.1.6876.90.1.2.4.0.37
A notification generated when the Edge VM is out of memory. The Edge reboots after 3 seconds. Action required: Collect tech support logs for further analysis.
                 vmwNsxMEdgeFileSysRo 1.3.6.1.4.1.6876.90.1.2.4.0.38
A notification generated when the Edge file system is read-only. Action required: Check for datastore issues; once resolved, a force sync is required.
                 vmwNsxMEdgeHaCommDisconnected 1.3.6.1.4.1.6876.90.1.2.4.0.39
A notification generated when the Edge High Availability communication channel is disconnected from the peer node. Action required: None
                 vmwNsxMEdgeHaSwitchOverSelf 1.3.6.1.4.1.6876.90.1.2.4.0.40
A notification generated when High Availability is disabled for NSX Edge. The primary NSX Edge VM has its state transitioned from ACTIVE to SELF. High Availability (HA) ensures that NSX Edge services are always available by deploying an additional Edge VM for failover. The primary NSX Edge VM is the ACTIVE node and the secondary VM is the STANDBY node. Whenever the ACTIVE VM is unreachable, on account of being powered off or of network connectivity issues, the STANDBY VM takes over the ACTIVE VM's role. In the event NSX Edge High Availability is disabled, the STANDBY VM is deleted and the ACTIVE VM continues to function with its ACTIVE state transitioned to SELF. Action required: None
                 vmwNsxMEdgeHaSwitchOverActive 1.3.6.1.4.1.6876.90.1.2.4.0.41
A notification generated when a High Availability switch-over has happened for NSX Edge. The secondary NSX Edge VM has its state transitioned from STANDBY to ACTIVE. High Availability (HA) ensures that NSX Edge services are always available by deploying an additional Edge VM for failover. The primary NSX Edge VM is the ACTIVE node and the secondary VM is the STANDBY node. Whenever the ACTIVE VM is unreachable, on account of being powered off or of network connectivity issues, the STANDBY VM takes over the ACTIVE VM's role. Action required: None
                 vmwNsxMEdgeHaSwitchOverStandby 1.3.6.1.4.1.6876.90.1.2.4.0.42
A notification generated when a High Availability switch-over has happened for NSX Edge. The primary NSX Edge VM has its state transitioned from ACTIVE to STANDBY. High Availability (HA) ensures that NSX Edge services are always available by deploying an additional Edge VM for failover. The primary NSX Edge VM is the ACTIVE node and the secondary VM is the STANDBY node. Whenever the ACTIVE VM is unreachable, on account of being powered off or of network connectivity issues, the STANDBY VM takes over the ACTIVE VM's role. When connectivity is re-established between the NSX Edge VMs, one VM's state is transitioned from ACTIVE to STANDBY. Action required: None
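The three switch-over traps above describe a small state machine for the Edge HA pair. The sketch below encodes the transitions exactly as the descriptions give them; the trigger names are informal labels for the scenarios, not MIB identifiers.

    # Edge HA states (ACTIVE, STANDBY, SELF) and the transitions described by
    # vmwNsxMEdgeHaSwitchOverSelf/Active/Standby. Trigger names are informal.
    HA_TRANSITIONS = {
        ('ACTIVE', 'ha_disabled'): 'SELF',          # HA turned off: STANDBY VM deleted
        ('STANDBY', 'peer_unreachable'): 'ACTIVE',  # failover: standby takes over
        ('ACTIVE', 'peer_reconnected'): 'STANDBY',  # connectivity restored: one VM steps down
    }

    def next_state(state: str, trigger: str) -> str:
        # Unknown (state, trigger) pairs leave the state unchanged.
        return HA_TRANSITIONS.get((state, trigger), state)

    assert next_state('STANDBY', 'peer_unreachable') == 'ACTIVE'
    assert next_state('ACTIVE', 'ha_disabled') == 'SELF'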
                 vmwNsxMEdgeMonitorProcessFailure 1.3.6.1.4.1.6876.90.1.2.4.0.43
A notification generated when Edge process monitor detected a process failure. Action required: Collect tech support logs for further analysis.
                 vmwNsxMLbVirtualServerPoolUp 1.3.6.1.4.1.6876.90.1.2.4.0.44
A notification generated when LoadBalancer virtualServer/pool is up. Action required: None
                 vmwNsxMLbVirtualServerPoolDown 1.3.6.1.4.1.6876.90.1.2.4.0.45
A notification generated when LoadBalancer virtualServer/pool is down.
                 vmwNsxMLbVirtualServerPoolWrong 1.3.6.1.4.1.6876.90.1.2.4.0.46
A notification generated when LoadBalancer virtualServer/pool state is wrong.
                 vmwNsxMLbPoolWarning 1.3.6.1.4.1.6876.90.1.2.4.0.47
A notification generated when LoadBalancer pool changed to a warning state.
                 vmwNsxMIpsecChannelUp 1.3.6.1.4.1.6876.90.1.2.4.0.48
A notification generated when IPsec Channel is up. Action required: None
                 vmwNsxMIpsecChannelDown 1.3.6.1.4.1.6876.90.1.2.4.0.49
A notification generated when IPsec Channel is down. Action required: Collect tech support logs for further analysis.
                 vmwNsxMIpsecTunnelUp 1.3.6.1.4.1.6876.90.1.2.4.0.50
A notification generated when IPsec Tunnel is up. Action required: None
                 vmwNsxMIpsecTunnelDown 1.3.6.1.4.1.6876.90.1.2.4.0.51
A notification generated when IPsec Tunnel is down. Action required: Collect tech support logs for further analysis.
                 vmwNsxMIpsecChannelUnknown 1.3.6.1.4.1.6876.90.1.2.4.0.52
A notification generated when IPsec Channel status is unknown. Action required: Collect tech support logs for further analysis.
                 vmwNsxMIpsecTunnelUnknown 1.3.6.1.4.1.6876.90.1.2.4.0.53
A notification generated when IPsec Tunnel status is unknown. Action required: Collect tech support logs for further analysis.
                 vmwNsxMGlobalLbMemberUp 1.3.6.1.4.1.6876.90.1.2.4.0.54
A notification generated when Global Loadbalancer member status is up. Action required: None
                 vmwNsxMGlobalLbMemberWarning 1.3.6.1.4.1.6876.90.1.2.4.0.55
A notification generated when Global Loadbalancer member status is warning.
                 vmwNsxMGlobalLbMemberDown 1.3.6.1.4.1.6876.90.1.2.4.0.56
A notification generated when Global Loadbalancer member status is down.
                 vmwNsxMGlobalLbMemberUnknown 1.3.6.1.4.1.6876.90.1.2.4.0.57
A notification generated when Global Loadbalancer member status is unknown.
                 vmwNsxMGlobalLbPeerUp 1.3.6.1.4.1.6876.90.1.2.4.0.58
A notification generated when Global Loadbalancer peer status is up. Action required: None
                 vmwNsxMGlobalLbPeerDown 1.3.6.1.4.1.6876.90.1.2.4.0.59
A notification generated when Global Loadbalancer peer status is down.
                 vmwNsxMDhcpServiceDisabled 1.3.6.1.4.1.6876.90.1.2.4.0.60
A notification generated when DHCP Relay Service is disabled.
                 vmwNsxMEdgeResourceReservationFailure 1.3.6.1.4.1.6876.90.1.2.4.0.61
Insufficient CPU and/or memory resources were available on the host or resource pool during resource reservation at the time of NSX Edge deployment. Resources are explicitly reserved to ensure sufficient resources are available for NSX Edge to service High Availability. Users can view available versus reserved resources by navigating to Home > Hosts and Clusters > [Cluster-name] > Monitor > Resource Reservation. Action required: After checking available resources, re-specify the resources as part of the appliance configuration so that resource reservation succeeds.
                 vmwNsxMEdgeSplitBrainDetected 1.3.6.1.4.1.6876.90.1.2.4.0.62
Split brain detected for NSX Edge with High Availability. NSX Edge VMs configured for High Availability are unable to determine whether the other VM is alive due to a network failure. In such a scenario, both VMs think the other is not alive and take on the ACTIVE state. This may cause network disruption. Action required: The user will need to check the network infrastructure (virtual and physical) for any failures, especially on the interfaces and the path configured for HA.
                 vmwNsxMEdgeSplitBrainRecovered 1.3.6.1.4.1.6876.90.1.2.4.0.63
Resolved split brain for NSX Edge with High Availability. The network path used by the NSX Edge VMs' High Availability has been re-established. The NSX Edge VMs are able to communicate with each other, and one of the VMs has taken the STANDBY role, resolving the ACTIVE-ACTIVE split-brain scenario. Action required: None
                 vmwNsxMEdgeSplitBrainRecoveryAttempt 1.3.6.1.4.1.6876.90.1.2.4.0.64
Attempted Split Brain resolution for NSX Edge. Split Brain recovery will be attempted on NSX Edge versions prior to 6.2.3, which are not based on BFD. Action required: None
         vmwNsxMEndpoint 1.3.6.1.4.1.6876.90.1.2.5
Notifications that are Endpoint-related will have this OID prefix.
             vmwNsxMEndpointPrefix 1.3.6.1.4.1.6876.90.1.2.5.0
This group is the prefix one uses when creating VMware NSX Manager-specific trap OIDs for the Endpoint module.
                 vmwNsxMEndpointThinAgentEnabled 1.3.6.1.4.1.6876.90.1.2.5.0.1
A notification generated when Thin agent is enabled.
                 vmwNsxMGuestIntrspctnEnabled 1.3.6.1.4.1.6876.90.1.2.5.0.2
A notification generated when Guest Introspection solution is enabled.
                 vmwNsxMGuestIntrspctnIncompatibleEsx 1.3.6.1.4.1.6876.90.1.2.5.0.3
A notification generated when Guest Introspection solution was contacted by an incompatible version of the ESX module.
                 vmwNsxMGuestIntrspctnEsxConnFailed 1.3.6.1.4.1.6876.90.1.2.5.0.4
A notification generated when the connection between the ESX module and the Guest Introspection solution failed.
                 vmwNsxMGuestIntrspctnStatusRcvFailed 1.3.6.1.4.1.6876.90.1.2.5.0.5
A notification generated when status could not be received from the Guest Introspection solution.
                 vmwNsxMEsxModuleEnabled 1.3.6.1.4.1.6876.90.1.2.5.0.6
A notification generated when ESX module is enabled.
                 vmwNsxMEsxModuleUninstalled 1.3.6.1.4.1.6876.90.1.2.5.0.7
A notification generated when ESX module is uninstalled.
                 vmwNsxMGuestIntrspctnHstMxMssngRep 1.3.6.1.4.1.6876.90.1.2.5.0.8
A notification generated when a report from the Guest Introspection host MUX is missing.
                 vmwNsxMEndpointUndefined 1.3.6.1.4.1.6876.90.1.2.5.0.9
A notification generated when Endpoint is undefined.
         vmwNsxMEam 1.3.6.1.4.1.6876.90.1.2.6
Notifications that are EAM-related will have this OID prefix.
             vmwNsxMEamPrefix 1.3.6.1.4.1.6876.90.1.2.6.0
This group is the prefix one uses when creating VMware NSX Manager-specific trap OIDs for the EAM module.
                 vmwNsxMEamGenericAlarm 1.3.6.1.4.1.6876.90.1.2.6.0.1
EAM reports problems to NSX during VIB / service VM install or upgrade via these traps. Action required: Use the resolve API to resolve the alarm. Frequency of traps: N times per cluster per user action, where N is the number of hosts in the cluster.
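The "resolve API" referenced here (and in several alarms below) is part of the NSX Manager REST API. The sketch below only illustrates the call shape; the endpoint path, hostname, credentials, and alarm ID are all assumptions, so consult the NSX for vSphere API guide for your version for the exact URL.

    # Hedged sketch only: the alarm-resolution endpoint path is an assumption,
    # as are the hostname, credentials, and alarm ID.
    import requests

    NSX = 'https://nsx-mgr.example.com'   # placeholder NSX Manager hostname
    ALARM_ID = 'alarm-123'                # hypothetical alarm identifier

    resp = requests.post(
        f'{NSX}/api/2.0/services/alarms/{ALARM_ID}?action=resolve',
        auth=('admin', 'secret'),
        verify=False,
    )
    resp.raise_for_status()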
         vmwNsxMFabric 1.3.6.1.4.1.6876.90.1.2.7
Notifications that are Fabric-related will have this OID prefix.
             vmwNsxMFabricPrefix 1.3.6.1.4.1.6876.90.1.2.7.0
This group is the prefix one uses when creating VMware NSX Manager-specific trap OIDs for the Fabric module.
                 vmwNsxMFabricDplymntStatusChanged 1.3.6.1.4.1.6876.90.1.2.7.0.1
The status of a service on a cluster has changed. It can change to RED (failure), GREEN (success), or YELLOW (in progress). Action required: A RED state is accompanied by an EAM alarm/event/trap that indicates the root cause; use the resolve API to fix it. Frequency of traps: Once per state change. The state can change 2-3 times per user operation [Deploy/Undeploy/Update].
                 vmwNsxMFabricDplymntUnitCreated 1.3.6.1.4.1.6876.90.1.2.7.0.2
NSX Manager has created the required objects for deploying a service on a cluster. This is followed by deployment of the service on all hosts in the cluster. Action required: None. Frequency of traps: Once per cluster.
                 vmwNsxMFabricDplymntUnitUpdated 1.3.6.1.4.1.6876.90.1.2.7.0.3
NSX Manager has made changes in the objects required for deploying a service on a cluster. This is followed by an update of the service on all hosts in the cluster. Action required: None. Frequency of traps: Once per cluster per user operation [Update].
                 vmwNsxMFabricDplymntUnitDestroyed 1.3.6.1.4.1.6876.90.1.2.7.0.4
A service has been removed from all hosts in a cluster. NSX Manager has deleted the objects for the service on the cluster. Action required: None. Frequency of traps: Once per cluster.
                 vmwNsxMDataStoreNotCnfgrdOnHost 1.3.6.1.4.1.6876.90.1.2.7.0.5
The datastore could not be configured on the host; it is probably not connected. Action required: Ensure that the datastore is connected to the host. Use the resolve API to resolve the alarm. The service will be deployed. Frequency of traps: Once per cluster per user operation [Deploy].
                 vmwNsxMFabricDplymntInstallationFailed 1.3.6.1.4.1.6876.90.1.2.7.0.6
Installation of the service failed. Please check that the OVF/VIB URLs are accessible and in the correct format, and that all the properties in the OVF environment have been configured in the service attributes. Please check the logs for details. Action required: Ensure that the OVF/VIB URLs are accessible from VC and in the correct format. Use the resolve API to resolve the alarm. The service will be deployed. Frequency of traps: Once per cluster per user operation [Deploy].
                 vmwNsxMFabricAgentCreated 1.3.6.1.4.1.6876.90.1.2.7.0.7
The service has been successfully installed on a host. Action required: None. Frequency of traps: N times per cluster, where N is the number of hosts in the cluster.
                 vmwNsxMFabricAgentDestroyed 1.3.6.1.4.1.6876.90.1.2.7.0.8
The service has been successfully removed from a host. Action required: None. Frequency of traps: N times per cluster, where N is the number of hosts in the cluster.
                 vmwNsxMFabricSrvceNeedsRedplymnt 1.3.6.1.4.1.6876.90.1.2.7.0.9
The service will need to be redeployed because the location of the OVF/VIB bundles to be deployed has changed. Action required: Use the resolve API to resolve the alarm. The service will be redeployed. Frequency of traps: N times per NSX Manager IP change, where N is the number of cluster and service combinations deployed.
                 vmwNsxMUpgradeOfDplymntFailed 1.3.6.1.4.1.6876.90.1.2.7.0.10
Upgrade of the deployment unit failed. Please check that the OVF/VIB URLs are accessible and in the correct format, and that all the properties in the OVF environment have been configured in the service attributes. Please check the logs for details. Action required: Ensure that the OVF/VIB URLs are accessible from VC and in the correct format. Use the resolve API to resolve the alarm. The service will be redeployed. Frequency of traps: Once per cluster per user operation [Upgrade].
                 vmwNsxMFabricDependenciesNotInstalled 1.3.6.1.4.1.6876.90.1.2.7.0.11
The service being installed is dependent on another service that has not yet been installed. Action required: Deploy the required service on the cluster. Frequency of traps: Once per cluster per user operation [Deploy]
                 vmwNsxMFabricErrorNotifSecBfrUpgrade 1.3.6.1.4.1.6876.90.1.2.7.0.12
Error while notifying the security solution before an upgrade. The solution may not be reachable or responding. Action required: Ensure that the solution URLs are accessible from NSX. Use the resolve API to resolve the alarm. The service will be redeployed. Frequency of traps: Once per cluster per user operation [Upgrade].
                 vmwNsxMFabricErrCallbackNtRcvdUpgrade 1.3.6.1.4.1.6876.90.1.2.7.0.13
Did not receive a callback from the security solution for the upgrade notification, even after the timeout. Action required: Ensure that the solution URLs are accessible from NSX, and that NSX is reachable from the solution. Use the resolve API to resolve the alarm. The service will be redeployed. Frequency of traps: Once per cluster per user operation [Upgrade].
                 vmwNsxMFabricErrCallbackNtRcvdUninstall 1.3.6.1.4.1.6876.90.1.2.7.0.14
Uninstallation of the service failed. Action required: Ensure that the solution URLs are accessible from NSX, and that NSX is reachable from the solution. Use the resolve API to resolve the alarm. The service will be removed. Frequency of traps: Once per cluster per user operation [Uninstall].
                 vmwNsxMFabricUninstallServiceFailed 1.3.6.1.4.1.6876.90.1.2.7.0.15
Error while notifying the security solution before an uninstall. Resolve to notify once again, or delete to uninstall without notification. Action required: Ensure that the solution URLs are accessible from NSX, and that NSX is reachable from the solution. Use the resolve API to resolve the alarm. The service will be removed. Frequency of traps: Once per cluster per user operation [Uninstall].
                 vmwNsxMFabricErrorNotifSecBfrUninstall 1.3.6.1.4.1.6876.90.1.2.7.0.16
Error while notifying the security solution before an uninstall. Resolve to notify once again, or delete to uninstall without notification. Action required: Ensure that the solution URLs are accessible from NSX, and that NSX is reachable from the solution. Use the resolve API to resolve the alarm. The service will be removed. Frequency of traps: Once per cluster per user operation [Uninstall].
                 vmwNsxMFabricServerRebootUninstall 1.3.6.1.4.1.6876.90.1.2.7.0.17
The server rebooted while the security solution notification for an uninstall was in progress. Action required: Ensure that the solution URLs are accessible from NSX. Use the resolve API to resolve the alarm. The service will be uninstalled. Frequency of traps: Once per cluster per user operation [Uninstall].
                 vmwNsxMFabricServerRebootUpgrade 1.3.6.1.4.1.6876.90.1.2.7.0.18
The server rebooted while the security solution notification for an upgrade was in progress. Action required: Ensure that the solution URLs are accessible from NSX. Use the resolve API to resolve the alarm. The service will be redeployed. Frequency of traps: Once per cluster per user operation [Upgrade].
                 vmwNsxMFabricConnEamFailed 1.3.6.1.4.1.6876.90.1.2.7.0.19
NSX Manager relies on the ESX Agent Manager (EAM) service in VC for deploying and monitoring NSX VIBs on ESX. The connection to this EAM service has gone down. This could be due to an EAM service or VC restart/stop, or an issue in the EAM service. Action required: In the NSX UI, traverse to Manage, then NSX Management Service, and verify that the status of the VC connection on this page is green. Use the VC IP to verify that EAM is up by visiting https:///eam/mob. Frequency of traps: Once per switch from a successful to a failed EAM connection.
                 vmwNsxMFabricConnEamRestored 1.3.6.1.4.1.6876.90.1.2.7.0.20
NSX Manager relies on the EAM service in VC for deploying and monitoring NSX VIBs on ESX. The connection of NSX to this EAM service was re-established successfully. Action required: None. Frequency of traps: Once per switch from a failed to a successful EAM connection.
                 vmwNsxMFabricPreUninstallCleanUpFailed 1.3.6.1.4.1.6876.90.1.2.7.0.21
Pre-uninstall cleanup failed. Action required: Use the resolve API to resolve the alarm. The service will be removed. Frequency of traps: Once per cluster per user operation [Uninstall].
                 vmwNsxMFabricBackingEamNotFound 1.3.6.1.4.1.6876.90.1.2.7.0.22
The backing EAM agency for this deployment could not be found. It is possible that the VC services are still initializing. Please try to resolve the alarm to check for the existence of the agency. In case you have deleted the agency manually, please delete the deployment entry from NSX. Action required: Use the resolve API to check for the existence of the agency; if no backing agency exists in EAM, delete the deployment entry from NSX. Frequency of traps: Once per cluster.
         vmwNsxMDepPlugin 1.3.6.1.4.1.6876.90.1.2.8
Notifications that are DeploymentPlugin-related will have this OID prefix.
             vmwNsxMDepPluginPrefix 1.3.6.1.4.1.6876.90.1.2.8.0
This group is the prefix one uses when creating VMware NSX Manager-specific trap OIDs for the DeploymentPlugin module.
                 vmwNsxMDepPluginIpPoolExhausted 1.3.6.1.4.1.6876.90.1.2.8.0.1
When deploying Guest Introspection or another VM-based service with a static IP, NSX Manager needs to have an IP pool for IP assignment to the VM. This pool has been exhausted, and new service VMs cannot be provisioned. Action required: Traverse to the Networking & Security page in the VMware vSphere Web Client, then go to Installation, followed by Service Deployments. Note the IP pool name for the failed service. Now traverse to NSX Managers, then go to the Manage tab, followed by the Grouping Objects sub-tab. Click on IP Pools and add more IPs to the static IP pool. Use the resolve API to resolve the alarm. The service will be deployed. Frequency of traps: N times per cluster, where N is the number of hosts in the cluster.
                 vmwNsxMDepPluginGenericAlarm 1.3.6.1.4.1.6876.90.1.2.8.0.2
Deployment plugin generic alarm. Action required: Use the resolve API to resolve the alarm. The service will be deployed. Frequency of traps: N times per cluster, where N is the number of hosts in the cluster.
                 vmwNsxMDepPluginGenericException 1.3.6.1.4.1.6876.90.1.2.8.0.3
Deployment plugin generic exception alarm. Action required: Use the resolve API to resolve the alarm. The service will be deployed. Frequency of traps: N times per cluster, where N is the number of hosts in the cluster.
                 vmwNsxMDepPluginVmReboot 1.3.6.1.4.1.6876.90.1.2.8.0.4
The VM needs to be rebooted for some changes to take effect. Action required: Use the resolve API to resolve the alarm. Frequency of traps: N times per cluster, where N is the number of hosts in the cluster.
         vmwNsxMMessaging 1.3.6.1.4.1.6876.90.1.2.9
Notifications that are Messaging-related will have this OID prefix.
             vmwNsxMMessagingPrefix 1.3.6.1.4.1.6876.90.1.2.9.0
This group is the prefix one uses when creating VMware NSX Manager-specific trap OIDs for the Messaging module.
                 vmwNsxMMessagingConfigFailed 1.3.6.1.4.1.6876.90.1.2.9.0.1
A notification generated when host messaging configuration failed.
                 vmwNsxMMessagingReconfigFailed 1.3.6.1.4.1.6876.90.1.2.9.0.2
A notification generated when host messaging connection reconfiguration failed.
                 vmwNsxMMessagingConfigFailedNotifSkip 1.3.6.1.4.1.6876.90.1.2.9.0.3
A notification generated when host messaging configuration failed and notifications were skipped.
                 vmwNsxMMessagingInfraUp 1.3.6.1.4.1.6876.90.1.2.9.0.4
Manager runs a heartbeat with all hosts it manages. Missing heartbeat responses from a host indicate a communication issue between manager and the host. Such instances are indicated by event code 391002. When the communication is restored after such an instance, it is indicated by this event/trap. Action required: Refer to KB article https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2133897 Frequency of traps: Will be seen within 3 minutes of communication being restored between manager and a host. URL: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2133897
                 vmwNsxMMessagingInfraDown 1.3.6.1.4.1.6876.90.1.2.9.0.5
Manager runs a heartbeat with all hosts it manages. Missing heartbeat responses from a host indicate a communication issue between manager and the host. In the case of such a communication issue, this trap will be sent. Action required: Refer to KB article https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2133897 Frequency of traps: Will be seen within 6 minutes of a communication failure between manager and a host. URL: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2133897
                 vmwNsxMMessagingDisabled 1.3.6.1.4.1.6876.90.1.2.9.0.6
A messaging client such as a Host, an Edge appliance or a USVM appliance is expected to change its password within 2 hours of being prepped or deployed. If the password isn't changed in this duration, the messaging account for the client is disabled. Action required: This event will indicate communication issue between the manager and the client. Verify if the client is running. If running, in case of a Host, re-sync messaging. In case of an Edge or a USVM, redeploy. Frequency of traps: Will be seen 2 hours after prep, host re-sync or deployment of appliance. URL: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2133897
         vmwNsxMServiceComposer 1.3.6.1.4.1.6876.90.1.2.10
Notifications that are ServiceComposer-related will have this OID prefix.
               vmwNsxMServiceComposerPrefix 1.3.6.1.4.1.6876.90.1.2.10.0
This group is the prefix one uses when creating VMware NSX Manager-specific trap OIDs for the ServiceComposer module.
                   vmwNsxMServiceComposerPolicyOutOfSync 1.3.6.1.4.1.6876.90.1.2.10.0.1
Service Composer encountered an error while attempting to enforce rules on this Policy. Action required: The administrator needs to check the rules on the given Policy for any errors, as reported in the message. After fixing the rules in the Policy, the user would need to resolve the alarm to bring this Policy back in sync. The Policy's alarm can be resolved either from the NSX Manager Service Composer UI or by using the alarms API. Frequency of traps: This trap is generated only once, if an error is encountered while enforcing the Policy.
                   vmwNsxMServiceComposerPolicyDeleted 1.3.6.1.4.1.6876.90.1.2.10.0.2
A Policy got deleted because the internal SecurityGroup over which the Policy was created got deleted. Frequency of traps: This event is generated once every time any internal SecurityGroup that is being consumed by a Policy gets deleted.
                   vmwNsxMServiceComposerFirewallPolicyOutOfSync 1.3.6.1.4.1.6876.90.1.2.10.0.3
Service Composer encountered an error while attempting to enforce Firewall rules on this Policy. Firewall-related changes on this Policy will not take effect until this alarm is resolved. Action required: The administrator needs to check the rules on the given Policy for any errors, as reported in the message. After fixing the rules in the Policy, the user would need to resolve the alarm to bring this Policy back in sync. The Policy's alarm can be resolved either from the NSX Manager Service Composer UI or by using the alarms API. Frequency of traps: This trap is generated only once, if an error is encountered while enforcing the Policy.
                   vmwNsxMServiceComposerNetworkPolicyOutOfSync 1.3.6.1.4.1.6876.90.1.2.10.0.4
Service Composer encountered an error while attempting to enforce Network Introspection rules on this Policy. Network Introspection-related changes on this Policy will not take effect until this alarm is resolved. Action required: The administrator needs to check the rules on the given Policy for any errors, as reported in the message. After fixing the rules in the Policy, the user would need to resolve the alarm to bring this Policy back in sync. The Policy's alarm can be resolved either from the NSX Manager Service Composer UI or by using the alarms API. Frequency of traps: This trap is generated only once, if an error is encountered while enforcing the Policy.
                   vmwNsxMServiceComposerGuestPolicyOutOfSync 1.3.6.1.4.1.6876.90.1.2.10.0.5
Service Composer encountered an error while attempting to enforce Guest Introspection rules on this Policy. Guest Introspection-related changes on this Policy will not take effect until this alarm is resolved. Action required: The administrator needs to check the rules on the given Policy for any errors, as reported in the message. After fixing the rules in the Policy, the user would need to resolve the alarm to bring this Policy back in sync. The Policy's alarm can be resolved either from the NSX Manager Service Composer UI or by using the alarms API. Frequency of traps: This trap is generated only once, if an error is encountered while enforcing the Policy.
                   vmwNsxMServiceComposerOutOfSync 1.3.6.1.4.1.6876.90.1.2.10.0.6
Service Composer encountered an error synchronizing Policies. Any changes on Service Composer will not be pushed to Firewall/Network Introspection services until this alarm is resolved. Action required: The administrator needs to check Policies and/or Firewall sections for any errors, as reported in the message. After fixing the errors, the user would need to resolve the alarm to bring Service Composer back in sync. The alarm can be resolved either from the NSX Manager Service Composer UI or by using the alarms API. Frequency of traps: This trap is generated only once, whenever an error is encountered.
                   vmwNsxMServiceComposerOutOfSyncRebootFailure 1.3.6.1.4.1.6876.90.1.2.10.0.7
Service Composer encountered an error while synchronizing Policies on reboot. Action required: The administrator needs to check Policies and/or Firewall config for any errors, as reported in the message. After fixing the errors, the user would need to resolve the alarm to bring Service Composer back in sync. The alarm can be resolved either from the NSX Manager Service Composer UI or by using the alarms API. Frequency of traps: This trap is generated only once on NSX Manager reboot, if an error is encountered.
                   vmwNsxMServiceComposerOutOfSyncDraftRollback 1.3.6.1.4.1.6876.90.1.2.10.0.8
Service Composer went out of sync due to a rollback of drafts from Firewall. Any changes on Service Composer will not be pushed to Firewall/Network Introspection Services until this alarm is resolved. Action required: The administrator needs to resolve the alarm to bring Service Composer back in sync. The alarm can be resolved either from the NSX Manager Service Composer UI or by using the alarms API. Frequency of traps: This trap is generated only once, whenever the Firewall config is reverted to an older draft.
                   vmwNsxMServiceComposerOutOfSyncSectionDeletionFailure 1.3.6.1.4.1.6876.90.1.2.10.0.9
Service Composer encountered an error while deleting the section corresponding to the Policy. This generally happens if the third-party (NetX) service's Manager is not reachable. Action required: The administrator needs to check connectivity with the third-party (NetX) service's Manager. Once connectivity is restored, the user must resolve the alarm. The alarm can be resolved either from the Service Composer UI or by using the alarms API. Frequency of traps: This trap is generated only once, if a failure is encountered while deleting a Policy's section on Policy deletion.
                   vmwNsxMServiceComposerOutOfSyncPrecedenceChangeFailure 1.3.6.1.4.1.6876.90.1.2.10.0.10
Service Composer encountered an error reordering sections to reflect a Policy's precedence change. This generally happens if there are alarms on any other Policy. Action required: The administrator needs to check Policies and/or Firewall sections for any errors, as reported in the message. After fixing the errors, the user must resolve the alarm. The alarm can be resolved either from the NSX Manager Service Composer UI or by using the alarms API. Frequency of traps: This trap is generated only once, if a failure is encountered while reordering sections to reflect the precedence change.
                   vmwNsxMServiceComposerOutOfSyncDraftSettingFailure 1.3.6.1.4.1.6876.90.1.2.10.0.11
Service Composer encountered an error while initializing the auto-save drafts setting. Action required: The administrator needs to check Policies and/or Firewall sections for any errors, as reported in the message. After fixing the errors, the user must resolve the alarm. The alarm can be resolved either from the NSX Manager Service Composer UI or by using the alarms API. Frequency of traps: This trap is generated only once, if a failure is encountered while initializing the auto-save drafts setting.
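Several of the Service Composer alarms above note that the alarm can be resolved by using the alarms API. Below is a minimal Python sketch of that flow. The alarm endpoint paths, manager address, credentials, and source ID are all assumptions made for illustration (only the https://<nsx-manager>/api/2.0/... convention comes from this document); consult the NSX API guide for your release before relying on them.

    import requests

    NSX_MANAGER = "nsx-manager.example.com"   # hypothetical manager address
    AUTH = ("admin", "secret")                # hypothetical credentials
    SOURCE_ID = "policy-12"                   # hypothetical Policy/source ID

    # List the alarms raised on the out-of-sync source. The exact path is an
    # assumption; this document only fixes the /api/2.0/... prefix convention.
    resp = requests.get(
        f"https://{NSX_MANAGER}/api/2.0/services/alarms/{SOURCE_ID}",
        auth=AUTH, verify=False)
    resp.raise_for_status()
    print(resp.text)  # XML describing the alarms on this source

    # After fixing the reported errors, resolve the alarms to bring the
    # Policy back in sync (assumed ?action=resolve pattern).
    resp = requests.post(
        f"https://{NSX_MANAGER}/api/2.0/services/alarms/{SOURCE_ID}?action=resolve",
        auth=AUTH, verify=False)
    resp.raise_for_status()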
         vmwNsxMSvmOperations 1.3.6.1.4.1.6876.90.1.2.11
Notifications related to SvmOperations will have this OID prefix.
               vmwNsxMSvmOperationsPrefix 1.3.6.1.4.1.6876.90.1.2.11.0
This group is the prefix used when creating VMware NSX Manager-specific trap OIDs for the SvmOperations module.
                   vmwNsxMInconsistentSvmAlarm 1.3.6.1.4.1.6876.90.1.2.11.0.1
Service VMs are deployed per ESX host to provide functionality such as guest introspection and McAfee/Trend virus checking in VMs on the host. An issue was detected with the state of the deployed Service VM. Follow the instructions in http://kb.vmware.com/kb/2125482 to analyze the logs further. Warning: Resolving this alarm will delete the VM. After deletion you will see a different alarm saying the VM was deleted; resolving that one will reinstall the VM. If redeployment of the VM does not fix the original issue, the original alarm will be added back immediately. Action required: Use the resolve API to resolve the alarm. Frequency of traps: Once per host.
                   vmwNsxMSvmRestartAlarm 1.3.6.1.4.1.6876.90.1.2.11.0.2
Service VMs are deployed per ESX host to provide functionality such as guest introspection and McAfee/Trend virus checking in VMs on the host. An issue was detected with the state of the deployed Service VM. Follow the instructions in http://kb.vmware.com/kb/2125482 to analyze the logs further. Warning: Resolving this alarm will restart the VM. If the root cause is not solved, the same alarm will be added back immediately. Action required: Use the resolve API to resolve the alarm. Frequency of traps: Once per host.
                   vmwNsxMSvmAgentUnavailable 1.3.6.1.4.1.6876.90.1.2.11.0.3
An issue was detected while marking the agent as available. Kindly check the logs. Resolving this alarm will attempt to mark the agent as available. Action required: Use the resolve API to resolve the alarm. Frequency of traps: Once per host.
         vmwNsxMTranslation 1.3.6.1.4.1.6876.90.1.2.12
Notifications related to Translation will have this OID prefix.
               vmwNsxMTranslationPrefix 1.3.6.1.4.1.6876.90.1.2.12.0
This group is the prefix used when creating VMware NSX Manager-specific trap OIDs for the Translation module.
                   vmwNsxMVmAddedToSg 1.3.6.1.4.1.6876.90.1.2.12.0.1
A VM has been added to a SecurityGroup. Frequency of traps: Once for every VM added to any SecurityGroup.
                   vmwNsxMVmRemovedFromSg 1.3.6.1.4.1.6876.90.1.2.12.0.2
A VM has been removed from a SecurityGroup. Frequency of traps: Once for every VM removed from any SecurityGroup.
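As a small illustration of how a trap consumer can make these two notifications readable, the Python sketch below maps the Translation trap OIDs listed above to their names. The OIDs come from this MIB; the surrounding function and sample call are illustrative only.

    # Map Translation trap OIDs (from this MIB) to human-readable names.
    TRANSLATION_TRAPS = {
        "1.3.6.1.4.1.6876.90.1.2.12.0.1": "vmwNsxMVmAddedToSg",
        "1.3.6.1.4.1.6876.90.1.2.12.0.2": "vmwNsxMVmRemovedFromSg",
    }

    def describe_trap(trap_oid: str) -> str:
        # Fall back to the raw OID for traps this table does not cover.
        return TRANSLATION_TRAPS.get(trap_oid, trap_oid)

    print(describe_trap("1.3.6.1.4.1.6876.90.1.2.12.0.1"))  # vmwNsxMVmAddedToSg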
         vmwNsxMUniversalSync 1.3.6.1.4.1.6876.90.1.2.13
Notifications related to UniversalSync will have this OID prefix.
               vmwNsxMUniversalSyncPrefix 1.3.6.1.4.1.6876.90.1.2.13.0
This group is the prefix used when creating VMware NSX Manager-specific trap OIDs for the UniversalSync module.
                   vmwNsxMFullUniversalSyncFailed 1.3.6.1.4.1.6876.90.1.2.13.0.1
A failure was encountered during a full sync of universal objects on a secondary NSX Manager. The IP address of the secondary NSX Manager is present in the event's message variable. Action required: Kindly check the NSX Manager logs on the secondary NSX Manager on which the full sync failed. Frequency of traps: This trap is generated once per NSX Manager on which a full-sync failure is seen.
                   vmwNsxMSecondaryDown 1.3.6.1.4.1.6876.90.1.2.13.0.2
The secondary NSX Manager is unreachable. Action required: Kindly check whether the secondary NSX Manager is running and is reachable from the primary NSX Manager. The IP address of the secondary NSX Manager is present in the event's message variable. Frequency of traps: This trap is generated once per NSX Manager for which a connection issue is seen.
                   vmwNsxMUniversalSyncFailedForEntity 1.3.6.1.4.1.6876.90.1.2.13.0.3
A failure was encountered during a sync of a universal object on a secondary NSX Manager. The IP address of the secondary NSX Manager is present in the event's message variable. Action required: Kindly check the NSX Manager logs on the secondary NSX Manager on which the sync failed. Frequency of traps: This trap is generated once per universal object on an NSX Manager on which a sync failure is seen.
         vmwNsxMAsyncRest 1.3.6.1.4.1.6876.90.1.2.14
Notifications related to AsyncRest will have this OID prefix.
               vmwNsxMAsyncRestPrefix 1.3.6.1.4.1.6876.90.1.2.14.0
This group is the prefix used when creating VMware NSX Manager-specific trap OIDs for the AsyncRest module.
                   vmwNsxMServerUp 1.3.6.1.4.1.6876.90.1.2.14.0.1
Denotes that the NSX Manager server is up and in a running state; informs clients of NSX Manager of the current state. Action required: None. Frequency of traps: Once for every query.
         vmwNsxMExtensionRegistration 1.3.6.1.4.1.6876.90.1.2.15
Notifications related to ExtensionRegistration will have this OID prefix.
               vmwNsxMExtensionRegistrationPrefix 1.3.6.1.4.1.6876.90.1.2.15.0
This group is the prefix used when creating VMware NSX Manager-specific trap OIDs for the ExtensionRegistration module.
                   vmwNsxMExtensionRegistered 1.3.6.1.4.1.6876.90.1.2.15.0.1
Registers NSX Manager as a vCenter extension. This is applicable when no other NSX Manager is registered with vCenter and the current NSX Manager is the one registering. Action required: None. Frequency of traps: Only once, when the extension is registered for the very first time.
                   vmwNsxMExtensionUpdated 1.3.6.1.4.1.6876.90.1.2.15.0.2
Updates the vCenter extension registration with the new NSX Manager. This is applicable when another NSX Manager is already registered as a vCenter extension and the current one overwrites it. Action required: None. Frequency of traps: Every time an NSX Manager registers as a vCenter extension when another NSX Manager is already registered with vCenter.
         vmwNsxMDlp 1.3.6.1.4.1.6876.90.1.2.16
Notifications related to Dlp will have this OID prefix.
               vmwNsxMDlpPrefix 1.3.6.1.4.1.6876.90.1.2.16.0
This group is the prefix used when creating VMware NSX Manager-specific trap OIDs for the Dlp module.
                   vmwNsxMDataSecScanStarted 1.3.6.1.4.1.6876.90.1.2.16.0.1
A notification generated when an NSX Data Security scan starts on a VirtualMachine.
                   vmwNsxMDataSecScanEnded 1.3.6.1.4.1.6876.90.1.2.16.0.2
A notification generated when an NSX Data Security scan ends on a VirtualMachine.
         vmwNsxMSamSystem 1.3.6.1.4.1.6876.90.1.2.17
Notifications related to SamSystem will have this OID prefix.
               vmwNsxMSamSystemPrefix 1.3.6.1.4.1.6876.90.1.2.17.0
This group is the prefix used when creating VMware NSX Manager-specific trap OIDs for the SamSystem module.
                   vmwNsxMSamDataCollectionEnabled 1.3.6.1.4.1.6876.90.1.2.17.0.1
Service Activity Monitoring will start collecting data. Action required: None. Frequency of traps: Event is triggered when the SAM data-collection state is toggled.
                   vmwNsxMSamDataCollectionDisabled 1.3.6.1.4.1.6876.90.1.2.17.0.2
Service Activity Monitoring will stop collecting data. Action required: SAM data collection can be enabled to start collecting data again. Frequency of traps: Event is triggered when the SAM data-collection state is toggled.
                   vmwNsxMSamDataStoppedFlowing 1.3.6.1.4.1.6876.90.1.2.17.0.3
Service Activity Monitoring data stopped flowing from the USVM. Action required: Check the following: the USVM log, to see whether heartbeats are received and sent; whether the USVM is running; whether the Mux-to-USVM connection is healthy; whether the USVM-to-RMQ connection is healthy; and whether the VM has the endpoint driver installed. Frequency of traps: Event is triggered when NSX Manager does not receive SAM data from the USVM.
                   vmwNsxMSamDataResumedFlowing 1.3.6.1.4.1.6876.90.1.2.17.0.4
Service Activity Monitoring data resumed flowing from the USVM. Action required: None. Frequency of traps: Event is triggered when SAM data is received from the USVM.
         vmwNsxMUsvm 1.3.6.1.4.1.6876.90.1.2.18
Notifications related to Usvm will have this OID prefix.
               vmwNsxMUsvmPrefix 1.3.6.1.4.1.6876.90.1.2.18.0
This group is the prefix used when creating VMware NSX Manager-specific trap OIDs for the Usvm module.
                   vmwNsxMUsvmHeartbeatStopped 1.3.6.1.4.1.6876.90.1.2.18.0.1
The USVM stopped sending heartbeats to the management plane. Action required: The connection to NSX Manager was lost; check why the USVM stopped sending heartbeats. Frequency of traps: Event is triggered when NSX Manager does not receive heartbeats from the USVM.
                   vmwNsxMUsvmHeartbeatResumed 1.3.6.1.4.1.6876.90.1.2.18.0.2
The USVM will start sending heartbeats to the management plane. Action required: None. Frequency of traps: Event is triggered when NSX Manager receives heartbeats from the USVM.
                   vmwNsxMUsvmReceivedHello 1.3.6.1.4.1.6876.90.1.2.18.0.3
The USVM sent a HELLO message to the Mux. Action required: None. Frequency of traps: Event is triggered when the Epsec Mux receives a HELLO message from the USVM during initial connection establishment.
         vmwNsxMVsmCore 1.3.6.1.4.1.6876.90.1.2.19
Notifications related to VsmCore will have this OID prefix.
               vmwNsxMVsmCorePrefix 1.3.6.1.4.1.6876.90.1.2.19.0
This group is the prefix used when creating VMware NSX Manager-specific trap OIDs for the VsmCore module.
                   vmwNsxMUpgradeSuccess 1.3.6.1.4.1.6876.90.1.2.19.0.1
A notification generated when NSX Manager is upgraded successfully.
                   vmwNsxMRestoreSuccess 1.3.6.1.4.1.6876.90.1.2.19.0.2
A notification generated when NSX Manager is restored successfully.
                   vmwNsxMDuplicateIp 1.3.6.1.4.1.6876.90.1.2.19.0.3
The NSX Manager IP address has been assigned to another machine. Action required: None. Frequency of traps: Triggered whenever NSX Manager detects that its IP address is being used by another machine in the same network.
         vmwNsxMVxlan 1.3.6.1.4.1.6876.90.1.2.20
Notifications related to Vxlan will have this OID prefix.
               vmwNsxMVxlanPrefix 1.3.6.1.4.1.6876.90.1.2.20.0
This group is the prefix used when creating VMware NSX Manager-specific trap OIDs for the Vxlan module.
                   vmwNsxMVxlanLogicalSwitchImproperlyCnfg 1.3.6.1.4.1.6876.90.1.2.20.0.1
This event is triggered if one or more distributed virtual port groups backing a certain Logical Switch were modified and/or removed, or if migration of the control plane mode for a Logical Switch/Transport Zone failed. Action required: (1) If the event was triggered by deletion or modification of backing distributed virtual port groups, the error will be visible on the Logical Switch UI page; resolving from there will try to create the missing distributed virtual port groups for the Logical Switch. (2) If the event was triggered by a failure of the control plane mode migration, redo the migration for that Logical Switch or Transport Zone. Frequency of traps: Event is triggered by user actions as explained in the description. Affects: Logical Switch network traffic.
                   vmwNsxMVxlanLogicalSwitchProperlyCnfg 1.3.6.1.4.1.6876.90.1.2.20.0.2
The Logical Switch status has been marked good, most probably as a result of resolving errors on it. Action required: None. Frequency of traps: Event is triggered when the user resolves the Logical Switch error and, as a result, the missing backing distributed virtual port groups are recreated.
                   vmwNsxMVxlanInitFailed 1.3.6.1.4.1.6876.90.1.2.20.0.3
Failed to configure a vmknic as a VTEP; VXLAN traffic through this interface will be dropped until this is resolved. Action required: Check the host's vmkernel.log for more details. Frequency of traps: Every time a VTEP vmknic tries to connect to its Distributed Virtual Port. Affects: VXLAN traffic on the affected host.
                   vmwNsxMVxlanPortInitFailed 1.3.6.1.4.1.6876.90.1.2.20.0.4
Failed to configure VXLAN on the Distributed Virtual Port; the port will be disconnected. Action required: Check the host's vmkernel.log for more details. Frequency of traps: Every time a VXLAN vNic tries to connect to its Distributed Virtual Port on the host. Affects: VXLAN traffic on the affected host.
                   vmwNsxMVxlanInstanceDoesNotExist 1.3.6.1.4.1.6876.90.1.2.20.0.5
VXLAN configuration was received for a Distributed Virtual Port, but the host has not yet enabled VXLAN on the vSphere Distributed Switch. VXLAN ports on the affected host will fail to connect until this is resolved. Action required: See KB 2107951 (https://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2107951&sliceId=1&docTypeID=DT_KB_1_1&dialogID=40732862&stateId=0%200%2040754197) Frequency of traps: Every time any VXLAN-related port (vNic or vmknic) tries to connect to its Distributed Virtual Port on the host. Affects: VXLAN traffic on that host.
                   vmwNsxMVxlanLogicalSwitchWrkngImproperly 1.3.6.1.4.1.6876.90.1.2.20.0.6
The VTEP interface was unable to join the specified multicast address; the VTEP will be unable to receive some traffic from other hosts until this is resolved. The host will periodically retry joining the group until it is successful. Action required: Check the host's vmkernel.log for more details. Frequency of traps: NSX retries joining failed multicast groups every 5 seconds. Affects: The Logical Switch associated with the problem VTEP interface won't work properly.
                   vmwNsxMVxlanTransportZoneIncorrectlyWrkng 1.3.6.1.4.1.6876.90.1.2.20.0.7
The IP address of a VTEP vmknic has changed. Action required: None. Frequency of traps: Every time a VTEP IP changes.
                   vmwNsxMVxlanTransportZoneNotUsed 1.3.6.1.4.1.6876.90.1.2.20.0.8
The VTEP vmknic does not have a valid IP address assigned; all VXLAN traffic through this vmknic will be dropped. Action required: Verify the IP configuration for the interface, and the DHCP server if DHCP is used. Frequency of traps: Once per VTEP losing its IP address.
                   vmwNsxMVxlanOverlayClassMissingOnDvs 1.3.6.1.4.1.6876.90.1.2.20.0.9
NSX packages were not installed prior to DVS configuration for VXLAN. All VXLAN ports will fail to connect until this is resolved. Action required: See KB 2107951 https://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2107951&sliceId=1&docTypeID=DT_KB_1_1&dialogID=40732862&stateId=0%200%2040754197 Frequency of traps: Once per setting of the com.vmware.netoverlay.layer0=vxlan opaque property, whenever the host is configured for VXLAN, or whenever the host reconnects to vCenter and has a problem. Affects: VXLAN traffic for that host.
                   vmwNsxMVxlanControllerRemoved 1.3.6.1.4.1.6876.90.1.2.20.0.10
A notification generated when a VXLAN Controller has been removed because the connection could not be established; check the controller IP configuration and deploy again.
                   vmwNsxMVxlanControllerConnProblem 1.3.6.1.4.1.6876.90.1.2.20.0.11
NSX Manager detected that the connection between two controller nodes is broken. Action required: This is a warning event; users need to check the controller cluster for further steps. Check KB 2127655 https://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2127655&sliceId=1&docTypeID=DT_KB_1_1&dialogID=40732913&stateId=0%200%2040754965 to see whether the issue matches. Frequency of traps: Whenever the controller reports the issue. Affects: Networking might be affected.
                   vmwNsxMVxlanControllerInactive 1.3.6.1.4.1.6876.90.1.2.20.0.12
Host certification information could not be sent to the NSX Controllers. Action required: Ensure that the NSX Controller cluster is in a healthy state before preparing a new host. Invoke the Controller Sync API to try to rectify this error. Frequency of traps: When a new host is prepared for NSX networking. Affects: The newly prepared host; the communication channel between the host and the NSX Controllers might have issues.
                   vmwNsxMVxlanControllerActive 1.3.6.1.4.1.6876.90.1.2.20.0.13
A notification generated when the Controller cluster state is now active; a Controller synchronization job is in progress. Frequency of traps: The Controller cluster becomes active again from a previous inactive state. Action required: The user does not have to take any corrective action; NSX will auto-sync the controllers.
                   vmwNsxMVxlanVmknicMissingOrDeleted 1.3.6.1.4.1.6876.90.1.2.20.0.14
The VXLAN vmknic is missing or was deleted from the host. Action required: The issue can be resolved from the Logical Network Preparation - VXLAN Transport UI section; clicking resolve will try to rectify the issue. Frequency of traps: The first time NSX Manager finds that the VXLAN vmknic is missing or deleted from the host. Affects: VXLAN traffic to/from the mentioned host will be affected.
                   vmwNsxMVxlanInfo 1.3.6.1.4.1.6876.90.1.2.20.0.15
NSX Manager raises this event when a connection between any of the following components is established or re-established: (i) NSX Manager and the Host Firewall agent; (ii) NSX Manager and the Control Plane Agent; (iii) the Control Plane Agent and the Controllers. Action required: None. Frequency of traps: Each time one of the above connections is established or re-established.
                   vmwNsxMVxlanVmknicPortGrpMissing 1.3.6.1.4.1.6876.90.1.2.20.0.16
NSX Manager detected that a VXLAN vmknic is missing on VC. Action required: Check the host; if that vmknic was deleted, click the resolve button in the UI, or call the remediate API (POST /api/2.0/vdn/config/host/{hostId}/vxlan/vteps?action=remediate) to recreate the VXLAN vmknic. Frequency of traps: The first time the VXLAN vmknic is detected missing (manually deleted by the user, or the inventory reports incorrect information). Affects: VXLAN traffic on that host may be interrupted.
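The remediate call named above can be scripted. The Python sketch below uses the documented path POST /api/2.0/vdn/config/host/{hostId}/vxlan/vteps?action=remediate; the manager address, host ID, and credentials are hypothetical placeholders.

    import requests

    NSX_MANAGER = "nsx-manager.example.com"  # hypothetical manager address
    HOST_ID = "host-123"                     # hypothetical host identifier

    # Ask NSX Manager to recreate the missing VXLAN vmknic on this host.
    resp = requests.post(
        f"https://{NSX_MANAGER}/api/2.0/vdn/config/host/{HOST_ID}/vxlan/vteps"
        "?action=remediate",
        auth=("admin", "secret"),  # hypothetical credentials
        verify=False)              # lab-only: skips TLS verification
    resp.raise_for_status()        # a non-2xx status means the request was rejected

The same POST-plus-?action=remediate pattern appears again below for missing backing portgroups and for Controller virtual machines.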
                   vmwNsxMVxlanVmknicPortGrpAppears 1.3.6.1.4.1.6876.90.1.2.20.0.17
NSX Manager detected that a VXLAN vmknic that was marked as missing has now reappeared on VC. Action required: None. Frequency of traps: When the missing vmknic reappears. Affects: VXLAN traffic on that host may resume.
                   vmwNsxMVxlanConnDown 1.3.6.1.4.1.6876.90.1.2.20.0.18
This event is triggered when any of the following connections is detected as down by NSX Manager: (i) NSX Manager and the Host Firewall agent; (ii) NSX Manager and the Control Plane Agent; (iii) the Control Plane Agent and the Controllers. Action required: (i) If the NSX Manager to Host Firewall Agent connection is down, check the NSX Manager and Firewall Agent logs for error details; you can try the Fabric Synchronize API to try to rectify the issue. (ii) If the NSX Manager to Control Plane Agent connection is down, check the NSX Manager and Control Plane Agent logs for error details, and check whether the Control Plane Agent process is down. (iii) If the Control Plane Agent to Controllers connection is down, go to the UI Installation page to check the connection status for the corresponding host. Frequency of traps: When (i) NSX Manager loses its connection with the Firewall agent on the host, (ii) NSX Manager loses its connection with the Control Plane Agent on the host, or (iii) the Control Plane Agent on the host loses its connection with the NSX Controllers. Affects: VMs on that host might be affected.
                   vmwNsxMBackingPortgroupMissing 1.3.6.1.4.1.6876.90.1.2.20.0.19
NSX Manager detected that a backing portgroup of a logical switch is missing on vCenter. Action required: Click the resolve button in the UI or call the API (POST https:///api/2.0/vdn/virtualwires//backing?action=remediate) to recreate the backing portgroup. Frequency of traps: Whenever a logical switch backing portgroup is missing on VC. Affects: VMs cannot be connected to this Logical Switch.
                   vmwNsxMBackingPortgroupReappears 1.3.6.1.4.1.6876.90.1.2.20.0.20
NSX Manager detected that a backing portgroup of a logical switch that was missing has reappeared on VC. Action required: None. Frequency of traps: Whenever the user triggers the remediate API on a Logical Switch that has a missing backing portgroup.
                   vmwNsxMManagedObjectIdChanged 1.3.6.1.4.1.6876.90.1.2.20.0.21
NSX Manager detected that the Managed Object ID of a backing portgroup of a logical switch changed. Action required: None. Frequency of traps: This typically happens when the user restores a backup of the Logical Switch backing portgroup.
                   vmwNsxMHighLatencyOnDisk 1.3.6.1.4.1.6876.90.1.2.20.0.22
NSX Manager detected that a disk on an NSX Controller has high latency. Action required: Rectify the issue on the specified device and controller. Frequency of traps: The first time NSX detects this issue as reported by the Controller. When the issue is resolved, another informational event will be raised by NSX Manager indicating the same. Affects: The NSX Controller.
                   vmwNsxMHighLatencyOnDiskResolved 1.3.6.1.4.1.6876.90.1.2.20.0.23
NSX Manager detected that a previously raised high-latency alert for a disk on an NSX Controller has been resolved. Frequency of traps: The first time NSX detects that the previously raised disk-latency issue has been resolved.
                   vmwNsxMControllerVmPoweredOff 1.3.6.1.4.1.6876.90.1.2.20.0.24
NSX Manager detected that a Controller virtual machine was powered off from vCenter. Action required: Click the 'Resolve' button on the Controller page in the UI or call the API (POST https:///api/2.0/vdn/controller/{controllerId}?action=remediate) to power on the Controller virtual machine. Frequency of traps: This event will be raised when a Controller virtual machine is powered off from vCenter. Affects: The Controller cluster status might go to disconnected if a Controller virtual machine is powered off. Any operation that requires an active Controller cluster may be affected.
                   vmwNsxMControllerVmDeleted 1.3.6.1.4.1.6876.90.1.2.20.0.25
NSX Manager detected that a Controller virtual machine was deleted from vCenter. Action required: Click the Resolve button on the Controller page in the UI or call the API (POST https:///api/2.0/vdn/controller/{controllerId}?action=remediate) to clean up NSX Manager's database state. Frequency of traps: This event will be raised when a Controller virtual machine is deleted from vCenter. Affects: The Controller cluster status might go to disconnected if a Controller virtual machine is deleted. Any operation that requires an active Controller cluster may be affected.
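Because the VTEP, backing-portgroup, and Controller entries all share the ?action=remediate pattern, a monitoring script might wrap it once, as sketched below. The controller path is taken from the two entries above; the controller ID, manager address, and credentials are hypothetical.

    import requests

    def remediate(manager: str, path: str, auth) -> None:
        # Issue one of the documented ?action=remediate POSTs and fail loudly
        # on any non-2xx response.
        resp = requests.post(f"https://{manager}{path}?action=remediate",
                             auth=auth, verify=False)
        resp.raise_for_status()

    # Power a Controller VM back on (or clean up a deleted one's database
    # state), per the entries above; "controller-1" is a hypothetical ID.
    remediate("nsx-manager.example.com",
              "/api/2.0/vdn/controller/controller-1",
              ("admin", "secret"))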
                   vmwNsxMVxlanConfigNotSet 1.3.6.1.4.1.6876.90.1.2.20.0.26
NSX Manager detected that the VXLAN configuration is not set on the host (would-block issue). This event indicates that NSX Manager tried to rectify the issue by resending the VXLAN configuration to the host. Action required: See KB 2107951 https://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2107951&sliceId=1&docTypeID=DT_KB_1_1&dialogID=40732862&stateId=0%200%2040754197 for more information. Frequency of traps: This event is generated when a host-preparation task is triggered for a host and the host encounters the would-block issue. Affects: This is a notification; there is no specific guidance for the next step.
     vmwNsxManagerMIBConformance 1.3.6.1.4.1.6876.90.1.99
           vmwNsxManagerMIBCompliances 1.3.6.1.4.1.6876.90.1.99.1
               vmwNsxManagerMIBBasicCompliance 1.3.6.1.4.1.6876.90.1.99.1.3
The compliance statement for entities which implement VMWARE-NSX-MANAGER-MIB.
           vmwNsxManagerMIBGroups 1.3.6.1.4.1.6876.90.1.99.2
               vmwNsxManagerNotificationInfoGroup1 1.3.6.1.4.1.6876.90.1.99.2.2
These objects provide details in NSX Manager notifications.
               vmwNsxManagerNotificationGroup1 1.3.6.1.4.1.6876.90.1.99.2.3
Group of objects describing notifications (traps, informs).
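Since everything in this MIB is delivered as SNMP notifications, the receiving party still needs a trap listener. Below is a minimal SNMPv2c listener sketch using the pysnmp 4.x API; the bind address, port, and 'public' community string are lab assumptions, and the filter simply prints varbinds from the VMware enterprise subtree under which the OIDs above live.

    from pysnmp.entity import engine, config
    from pysnmp.carrier.asyncore.dgram import udp
    from pysnmp.entity.rfc3413 import ntfrcv

    snmp_engine = engine.SnmpEngine()

    # Listen for traps on the standard port; address/community are assumptions.
    config.addTransport(snmp_engine, udp.domainName,
                        udp.UdpTransport().openServerMode(("0.0.0.0", 162)))
    config.addV1System(snmp_engine, "nsx-area", "public")

    VMWARE_PREFIX = "1.3.6.1.4.1.6876"  # enterprise subtree of the OIDs above

    def on_trap(engine_, state_ref, ctx_engine_id, ctx_name, var_binds, cb_ctx):
        # Print only the VMware varbinds (event code, timestamp, message, ...).
        for name, value in var_binds:
            oid = name.prettyPrint()
            if oid.startswith(VMWARE_PREFIX):
                print(oid, "=", value.prettyPrint())

    ntfrcv.NotificationReceiver(snmp_engine, on_trap)
    snmp_engine.transportDispatcher.jobStarted(1)
    snmp_engine.transportDispatcher.runDispatcher()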