### Human: Can you tell about NCC Health Check: disk_online_check. ### Assistant: The NCC health check disk_online_check checks if there are offline disks.. The NCC Health Check disk_online_check checks if there are offline or unmounted disks.
Running the NCC Check
It can be run as part of the complete NCC check by running:
nutanix@cvm$ ncc health_checks run_all
or individually as:
nutanix@cvm$ ncc health_checks run_all
You can also run the checks from the Prism web console Health page: Select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every 5 minutes, by default. This check generates a Warning alert A101048 after 6 consecutive failures.
Sample Output
For Status: PASS
nutanix@cvm$ ncc health_checks run_all
For Status: FAIL
nutanix@cvm$ ncc health_checks run_all
Output messaging
Check ID: 101048
Description: Check for the offline and unmounted disks
Causes of failure: A disk may have failed or was manually removed.
Resolutions: Please refer to KB 1536 for more information.
Impact: Cluster performance may be degraded.
Alert ID: A101048
Alert Title: Offline and unmounted disks in a cluster
Alert Smart Title: Disks mounted at mount_paths on CVM cvm_ip are marked offline. Disk serials disk_serial_number are not mounted on CVM cvm_ip.
Alert Message: Disks mounted at mount_paths on CVM cvm_ip are marked offline. Disk serials disk_serial_number are not mounted on CVM cvm_ip.
NOTE: It has been observed that this check displays a WARNING regarding the CVM boot volumes for NC2 clusters on AWS; at the same time, you may also observe that the same disks show as not mounted in the PE UI. You can safely ignore this warning, and no further actions are needed. Engineering is aware of the issue and is working to have it resolved in future versions of NCC.
nutanix@cvm$ ncc health_checks run_all
. Confirm the current disk status and check if the disks are mounted.
In the case of a single disk, check the output of the following:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
Replace <diskserialnumber> with the serial number of the currently mounted disk.
For multiple disks, check the output of the following:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
Replace diskserialnumber1 and diskserialnumber2 with the serial numbers of the currently mounted disks. For example, if you enter Z1ZBAHZ0 for diskserialnumber1 and Z1ZBAAEE for diskserialnumber2:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
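As a sketch only (the exact command block from the original article did not survive extraction), a combined check for the two example serial numbers could use a single grep with alternation; the pattern below is illustrative:
nutanix@cvm$ allssh 'df -h | grep -E "Z1ZBAHZ0|Z1ZBAAEE"'
A healthy, mounted disk shows one df line per serial on the CVM that owns it; a missing line suggests the disk is not mounted there.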
Confirm that there are no issues with Hades or offline mounting paths. If there are no issues, a blank output is generated:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
Now, confirm that there are no phantom entries in the disk configuration json files for the disk serial number you enter:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
Replace diskserialnumber with the serial number of the currently mounted disk. For illustration, Z1ZBAAEE and Z1ZBAHZ0 are used here. The correct output must look similar to the following:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
If these conditions are met, proceed to remove the disk IDs with:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
Replace <disk ID to be removed> with the ID of the disk you want to remove. For example, for disk ID 135646378 and ID 135646379,
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
Wait for the operation to complete. This might take up to 30 minutes, sometimes longer. The Prism web console should no longer contain those erroneous disk entries. Confirm by running this check:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
If a disk is marked or reported as offline and does not have a phantom entry, then the disk is not being utilized by the cluster and might potentially have failed.
As an additional check, you can manually check the failing disk:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
Replace sdX with the disk device node you want to check. Run the following command from a CVM (Controller VM) to determine the mapping of drive to disk serial number:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
Run the following command to check based on the above output:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
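If the command blocks above are not usable in this copy, the following is a minimal sketch of the same idea using standard tools; the device name /dev/sdX is a placeholder and the availability of list_disks should be verified on your cluster:
nutanix@cvm$ list_disks
nutanix@cvm$ sudo smartctl -a /dev/sdX | less
Here list_disks (where available) prints the slot-to-device-to-serial mapping, and smartctl -a prints the full SMART health output for the chosen device.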
Note: For Hyper-V, smartctl is located on the Hyper-V host.
From the CVM, run winsh to get into the PowerShell.
From the PowerShell, go to C:\"Program Files"\Nutanix\Utils\
Run the following command for SSD:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
Or the following for HDD:
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
Note: The -a parameter is used in smartctl to list all the stats for SSD or HDD. If you do not use the -a parameter, only the test result as either PASSED or FAILED is displayed.
You may be able to search for the DiskSerialNumber in the hades.out logs to understand why the disk was marked for removal and when.
For Example, here DiskSerialNumber is WSE02E46
nutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'
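As a hedged example (the log path below is the usual CVM location and may differ), searching the Hades log for that serial number could look like:
nutanix@cvm$ allssh 'grep WSE02E46 /home/nutanix/data/logs/hades.out'
Matching lines typically show when and why Hades marked the disk offline or for removal.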
For assistance in obtaining and installing a replacement drive, engage Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Configuring AOS for VMware Fault-Tolerance on ESXi 6.7. ### Assistant: AOS 5.19.x now supports VMware Fault Tolerance running ESXi 6.7. The Nutanix cluster requires configuring two Stargate timeouts as well as RPC timeouts on ESXi to ensure the timely failover of NFS services.. Starting from AOS 5.19.x, VMware Fault Tolerance is now supported with ESXi 6.7. To enable this feature, two Stargate timeouts, as well as RPC timeouts, must be configured on ESXi hosts to ensure timely failover of NFS services as required to support Fault Tolerant VMs.
In addition to the two Stargate timeouts, the following two parameters value must be increased to 30 seconds on each ESXi host in a cluster:
/NFS/CreateRPCTimeout and /NFS/RenameRPCTimeout. Contact Nutanix Support for assistance to set the Stargate timeouts. This configuration change requires a rolling restart of the Stargate process on each node of the cluster.
To set the ESXi host parameters /NFS/CreateRPCTimeout and /NFS/RenameRPCTimeout follow instructions below:
To set the parameters:
[root@ESXi]# esxcfg-advcfg -s 30 /NFS/CreateRPCTimeout
[root@ESXi]# esxcfg-advcfg -s 30 /NFS/RenameRPCTimeout
To check the parameters:
[root@ESXi]# esxcfg-advcfg -g /NFS/CreateRPCTimeout
[root@ESXi]# esxcfg-advcfg -g /NFS/RenameRPCTimeout. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Nutanix user being locked out from scale-out PC. ### Assistant: Nutanix user is locked on scale out Prism Central. Users cannot login as "nutanix" via SSH to a scale-out Prism Central deployment.
By running the below command as the admin user via SSH, it is possible to notice that the "nutanix" user is locked due to several "su -l" attempts:
admin@PCVM:~$ sudo faillock --user nutanix| tail -15
Switch to root with "sudo su -" and look into /home/log/secure logs for the below error message:
admin@PCVM:~$ sudo faillock --user nutanix| tail -15
Another way to identify the issue is to check /home/log/messages file:
admin@PCVM:~$ sudo faillock --user nutanix| tail -15
By tracing one of the failed PIDs in sysstats top.INFO log, it points to the below command:
admin@PCVM:~$ sudo faillock --user nutanix| tail -15
Which turns out to be a command run every minute by the vip_cleanup script:
admin@PCVM:~$ sudo faillock --user nutanix| tail -15
** NOTE **: The script only blocks 2 PCVMs out of 3 since it does not run on the PCVM with the VIP. This is also why it only affects some PCVMs, as there's always a VIP in scale-out deployments. If needed, it's possible to unlock the "nutanix" user with the below command in the affected PCVM:
admin@PCVM:~$ sudo faillock --user nutanix --reset. This issue is resolved in pc.2022.6.0.6. Nutanix recommends that the user upgrade Prism Central to pc.2022.6.0.6 or higher. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Prism Central slowness as it receives a huge amount of incoming v1/users/details API calls from PE Clusters causing availability issues. ### Assistant: Prism Central deployment with a large count (hundreds) of managed PE nodes (CVMs) may experience prism gateway service availability problems due to the constant flow of incoming PrismGateway/services/rest/v1/users/details API calls coming from Prism Elements with very high frequency.. Prism Central deployment with a large count (hundreds) of managed PE nodes (CVMs) may experience prism gateway service availability problems due to the constant flow of incoming PrismGateway/services/rest/v1/users/details API calls coming from Prism Elements with very high frequency.Identification:Verify that Prism Central manages multiple Prism Elements with a large total count of nodes in connected PEs (hundreds); To check the total number of managed PE nodes, use the command below:
nutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l
Symptoms may include:
The constant incoming flow of v1/users/details API calls in prism_proxy_access_log.out from connected Prism Elements. The frequency of incoming v1/users/details calls is very high (hundreds per minute)
nutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l
Prism Gateway service may be killed with OOM:
nutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l
You may see frequent entries of pods getting killed with OOM:
nutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l
Some API calls to Prism Gateway service may return HTTP response code 429 Too Many Requests:
nutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l
IAM enabled Prism Central may experience iam-proxy pod OOM restarts:
nutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l
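As an illustrative sketch only (the access log location below is an assumption and may differ between PC versions), the rate of incoming v1/users/details calls mentioned above can be estimated by counting matches in the access log:
nutanix@PCVM:~$ grep -c "v1/users/details" ~/data/logs/prism_proxy_access_log.out
A count in the hundreds per minute across connected PEs is consistent with the symptom described in this article.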
On every Prism Element, the CVM's ikat_control_plane service sends an excessive number of PC availability checks that cause a DDoS-like effect for PCs managing hundreds of nodes, resulting in this issue. Note: The above symptoms are not always caused by the problem discussed in this article. Technically, any source of API requests incoming to the PCVM with high frequency may cause similar issues. This KB article is relevant only for PC deployments managing hundreds of nodes and experiencing problems due to connected PEs sending large counts of v1/users/details API calls.. This issue is resolved in AOS versions 6.6.1, 6.5.2, and above. Upgrade the connected Prism Element clusters to the fix versions.
. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Alert - A110026 - Nutanix Cloud Clusters (NC2) - CloudAPIEndpointUnreachable. ### Assistant: This Nutanix article provides the information required for troubleshooting the alert CloudAPIEndpointUnreachable for your NC2 cluster.. Alert Overview
Nutanix Clusters deployed in public clouds are required to make cloud API calls for several operations such as VM IP address assignment validations, VM power-on, VM power-off, etc. These operations may face issues if cloud API calls fail due to underlying cloud infrastructure issues like network connectivity misconfiguration(s), temporary outages, permissions misconfiguration etc. The CloudAPIEndpointUnreachable alert is generated when the backend service detects failure executing cloud API calls due to cloud API endpoint unreachability.
Sample Alert
Check ID: 110026
Description: Connectivity to Cloud API Endpoint Failed.
Impact: Networking operations may be impacted on the affected node(s) in the cluster.
Alert Title: Cloud API Endpoint Unreachable
Alert ID: A110026
Alert Smart Title: Cloud API Endpoint may be down or unreachable: {message}
Cause #1: Cloud API endpoints may be unreachable due to intermittent network connectivity issues in the underlying public cloud.
Resolution #1: Check if the cloud infrastructure is down due to an outage.
Cause #2: Missing NAT gateway on Cluster Management subnet.
Resolution #2: Ensure the route table for the Cluster Management subnet in the cloud contains a default route (0.0.0.0/0) to the NAT gateway. Add the route if not present.
Cause #3: Cloud subnets may have incorrect route entries in the route tables.
Resolution #3: Check if the route tables for all cloud subnets contain the appropriate routes from the cloud management console.
Cause #4: IAM role, policy misconfiguration.
Resolution #4: Ensure IAM roles and policies are configured correctly.
Cause #5: Security group(s) and/or firewall misconfiguration.
Resolution #5: From the cloud management console, check if the inbound and outbound rules are correctly configured in the cloud security group(s). Ensure firewall rules do not block access.
Check cloud components (NAT/Internet GW, Subnet, etc) are not accidentally misconfigured or deleted. Check firewall policies are not misconfiguredCheck route tables contain correct route entries Check network access control list (ACL) and security group are not misconfigured
AWS deployments
Check if the correct IAM role has been assigned to the instance by executing 'curl 169.254.169.254/latest/meta-data/iam/security-credentials; echo' in the CVM. The output should display 'Nutanix-Clusters-High-Nc2-Cluster-Role-Prod'. If the role is not attached to the instance, attach it using the steps below:
Open AWS Mgmt Console and type 'EC2' in the search bar. Select EC2 service. Make sure that the AWS region is the one where the Nutanix Cluster is deployed.
In the EC2 search bar, select the filter as VPC-ID. Specify the VPC ID in which the cluster is deployed. The VPC ID can be obtained from the NC2 Console page.
Select the required instance from the filtered list of instances. Choose Actions, Security, Modify IAM role.
Select the IAM role 'Nutanix-Clusters-High-Nc2-Cluster-Role-Prod' to attach to your instance and choose Save.
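If AWS CLI access is available, a hedged equivalent of the console steps above is sketched below; the instance ID is a placeholder, and the instance profile name is assumed to match the role name created by your NC2 deployment:
aws ec2 describe-iam-instance-profile-associations --filters "Name=instance-id,Values=i-0123456789abcdef0"
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=Nutanix-Clusters-High-Nc2-Cluster-Role-Prod
The first command shows whether an instance profile is currently associated; the second attaches the expected profile if none is present.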
. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about VM power on failure on Nutanix Cloud Clusters (NC2) on AWS. Error: Failed to PlugNic. ### Assistant: Sharing AWS subnets between multiple NC2 or reusing the AWS subnets results in VM power On task failure.. NC2 does not support the sharing of the same AWS subnet among multiple clusters and reusing of AWS subnets. Other instances or services in the VPC may occupy IP addresses in AHV's pool of IPs. The AWS EC2 API does not provide a method to list used/available IP addresses.If you reuse or share AWS subnets in a cluster or among multiple clusters, Powering on or migrating VM may result in a failure with the message as in the screenshot:Log messages in /home/nutanix/data/logs/acropolis.out on CVM reports on "Failed to PlugNic":
nutanix@NTNX-i-0XXXXXXXXXXX-A-CVM:x.x.x.186:~$ allssh 'grep "Failed to PlugNic for IP" /home/nutanix/data/logs/acropolis.out'. Scenario 1. Multiple NC2 are sharing of the same AWS subnet
Nutanix Cluster A is using Subnet A. VM in the cluster is connected to Subnet A.
Nutanix Cluster B is configured with a Network with Subnet A.
Failure to power on a VM in Nutanix Cluster B.
Solution for Scenario 1
Connect all VMs in Nutanix Cluster A to a subnet other than Subnet A. Alternatively, delete all VMs in Nutanix Cluster A.
Delete Network with Subnet A via Prism Web Console in Nutanix Cluster A.
Run the command from one of the CVMs in Nutanix Cluster A:
nutanix@CVM:~$ hostssh 'cloudnet forcerun-gc br0.uvms'
Power on VM in Nutanix Cluster B
Scenario 2. Reusing AWS subnets in NC2
Nutanix Cluster used to use Subnet A in the past.
Nutanix Cluster is configured with a Network with Subnet A.
Failed to power on VM in the Nutanix Cluster.
Solution for Scenario 2
Connect all VMs in Nutanix Cluster A to a subnet other than Subnet A. Alternatively, delete all VMs in the Nutanix Cluster.
Delete Network with Subnet A via Prism Web Console.
Run the command from one of the CVMs in Nutanix Cluster:
nutanix@CVM:~$ hostssh 'cloudnet forcerun-gc br0.uvms'
Create a Network with Subnet A via Prism Web Console.
Connect VM in Nutanix Cluster A to Subnet A.
Power on VM in Nutanix Cluster.
Scenario 3. IP address occupied by non-Nutanix instance or other AWS object/service in the VPC.
Nutanix Cluster is configured with a Network with Subnet A.
Customer adds a non-Nutanix AWS instance, a transit gateway, or other AWS object or service endpoint in the same VPC.
A "Failed to PlugNic" error similar to the following is seen:
nutanix@CVM:~$ hostssh 'cloudnet forcerun-gc br0.uvms'
Solution for Scenario 3
Confirm the IP address is used intentionally (it should be identifiable from the AWS console; the error message above identifies the ENI in the InterfaceID field).
Run the command from one of the CVMs in the Nutanix Cluster to blacklist the IP reported in the error:
nutanix@CVM:~$ hostssh 'cloudnet forcerun-gc br0.uvms'
Retry the operation.
Note: The command cloudnet forcerun-gc br0.uvms is non-disruptive to the running UVMs. It cleans up ENIs that are not in use by UVMs. This is a per-host command, which is why it is run with hostssh. If you only have issues with one host, you can also run the command on a single AHV host. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about How to create long living tokens to integrate CI/CD pipeline in Nutanix Kubernetes Engine. ### Assistant: How to create long living tokens to integrate CI/CD pipeline.. Nutanix Kubernetes Engine is formerly known as Karbon or Karbon Platform Services. Nutanix Kubernetes Engine default token is valid for only 24 hours, which makes integration difficult with external components like CI/CD pipeline and Kubernetes cluster deployed by Nutanix Kubernetes Engine. . Nutanix recommend using the Nutanix Kubernetes Engine API to regenerate the kubeconfig before the 24 hours expiry and integrate that process in your CI/CD workflow. See Nutanix Dev for reference.Alternatively, you can use Nutanix Kubernetes Engine Service Account. This is not recommended as it requires an experienced administrator.In the procedure below, we are going to create a service account for Jenkins integration.
For the sake of simplicity, admin privilege is assigned via ClusterRole. More restricted access can be assigned using RBAC.
Create a service account
$ kubectl create serviceaccount jenkins
Create a role binding based on the permission needed by the application
$ kubectl create serviceaccount jenkins
Extract Service account token.
$ kubectl create serviceaccount jenkins
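Since the command blocks in this copy all render as the same line, here is a minimal sketch of what steps 2 and 3 typically look like; the binding name and the default namespace are illustrative assumptions:
$ kubectl create clusterrolebinding jenkins-admin --clusterrole=cluster-admin --serviceaccount=default:jenkins
$ kubectl get secret $(kubectl get serviceaccount jenkins -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode
The first command grants the service account cluster-admin via a ClusterRoleBinding (narrow this with RBAC where possible); the second prints the long-lived token from the auto-created Secret on Kubernetes releases earlier than 1.24.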
Clusters running Kubernetes 1.24 or later: Starting with Kubernetes 1.24, Kubernetes no longer generates Secrets automatically for ServiceAccounts. This means the command from step 3 will return an empty output on k8s 1.24 or later. If the cluster is running Kubernetes 1.24 or above, use the alternative step 3a:
3a. Create a service account token secret or create the token manually for the ServiceAccount.
Option 1: Create a secret of type service-account-token
$ kubectl create serviceaccount jenkins
Then extract the token
$ kubectl create serviceaccount jenkins
Option 2: Create a token manually for the Service Account:
$ kubectl create serviceaccount jenkins
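For Kubernetes 1.24 and later, a hedged sketch of the two options described above (object names are illustrative):
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-token
  annotations:
    kubernetes.io/service-account.name: jenkins
type: kubernetes.io/service-account-token
EOF
$ kubectl get secret jenkins-token -o jsonpath='{.data.token}' | base64 --decode
$ kubectl create token jenkins --duration=8760h
The Secret-based option yields a non-expiring token tied to the service account, while kubectl create token mints a bounded token whose maximum duration depends on the API server configuration.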
Download a new kubeconfig file from Karbon.
Update the token in the kubeconfig file with the token generated in the steps above.
Use the modified kubeconfig in the Jenkins pipeline to integrate the k8s cluster.
$ kubectl create serviceaccount jenkins
Important Notes:
Nutanix recommends using kubeconfig from the REST API or UI for user logins.
If you have to use service accounts, restrict their use to service integrations like CI/CD pipelines. Avoid using service accounts for general-purpose tasks.
Distributing long-lived tokens to users may introduce secret sprawl; thus, administrators must ensure that the tokens are not used for unintended purposes.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Adding or Removing a PCI device causes LSI controller to disconnect.. ### Assistant: Currently if you add a PCI device, physical or virtual, to an ESXi host the PCI bus number enumeration will change. This causes the LSI Adapter to go into a disconnected state on the Nutanix CVM.. Currently if you add a PCI device, physical or virtual to an ESXi host the PCI bus number enumeration will change. This causes the LSI Adapter to go into a disconnected state and the CVM will not power on.
UCS is inherently susceptible to this as you can add and remove as many vNIC's and vHBA's to a service profile as you need, but this has also been observed on the NX-8150 when adding a 10GB NIC card. It may also occur if a customer adds a new 10GB NIC if they initially only purchased and set up the cluster with 1GB NIC's.
To identify this, check the individual host's configuration under Configuration > Advanced Settings, where you will see the message "No devices currently enabled for passthrough".
The CVM itself will not power on. If you look at the VM settings you will see "Unavailable Device" for the PCI attached device 0.. To resolve this, in the respective host vSphere instance, go to the Host Configuration > Advanced Settings page and click Configure Passthrough on the right-hand side. From here, find the LSI Logic (LSIModel) device(s) / ID(s), check the box and click Ok to enable it. Some nodes, like the NX-8150-G5, can have one to three LSI HBA cards.
Next, reboot the host for the configuration to take effect. After reboot, the controller should be listed as below.Once the host is back up, edit the CVM configuration in vCenter. Remove the old PCI device(s), and add the new one(s). Select the LSI adapter(s) where PCI Passthrough just made active, and save.After hitting OK, power on the CVM.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Prism Central - pc.2021.9.0.5, pc.2022.1, or later: Unable to access PE via Cluster Quick Access after upgrading the AOS of the PE. ### Assistant: After upgrading AOS in a Prism Element which has been managed by a Prism Central that version is pc.2021.9.0.5, pc.2022.1 or later, you may not be able to access the Prism Element by using Cluster Quick Access feature of the Prims Central.. After upgrading an AOS cluster that is managed by a Prism Central running version pc.2021.9.0.5, pc.2022.1, or later, the user may not be able to access the Prism Element by using the Cluster Quick Access feature of the Prism Central.Behind the scene, the Prism service in Prism Central downloads the new Prism UI data from the upgraded Prism Element. This Prism service may have downloaded the new Prism UI data from PE but may not extract it for Cluster Quick Access use.
In the /home/nutanix/data/logs/prism_gateway.log in the Prism Central VM, you may find the ProtobufZNodeManagementException$BadVersionException exception happening with the following stack trace almost every minute.
nutanix@PCVM:-$ cat ~/data/logs/prism_gateway.log | grep 'ProtobufZNodeManagementException$BadVersionException'
If the Prism service has been restarted, you may see this ProtobufZNodeManagementException$BadVersionException once after the restart, and you may also see NullPointerException happening almost every minute, as follows.
nutanix@PCVM:-$ cat ~/data/logs/prism_gateway.log | grep 'ProtobufZNodeManagementException$BadVersionException'
nutanix@PCVM:-$ cat ~/data/logs/prism_gateway.log | grep 'ProtobufZNodeManagementException$BadVersionException'
Users may also notice that the Prism service downloaded the Prism UI data from the upgraded PE, but no "console" directory is created.For example, if you have upgraded your PE to AOS 5.20.2, Prism Central will download the "PrismUI.tar.gz" file from the PE, and then this file will be put in the "/home/apache/www/console/el7.3-release-euphrates-5.20.2-stable-9fe17e864a79fb1f4e58eda7a12075f9427ca364" directory. Then Prism Central will extract this file into the "console" directory in the same directory as the "PrismUI.tar.gz" file. But, this "console" directory is not created in this case.
nutanix@PCVM:-$ cat ~/data/logs/prism_gateway.log | grep 'ProtobufZNodeManagementException$BadVersionException'
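As a quick hedged check (the exact release string in the path will differ per AOS version), you can verify whether the extracted console directory exists next to the downloaded archive:
nutanix@PCVM:~$ ls -l /home/apache/www/console/el7.3-release-euphrates-5.20.2-stable-9fe17e864a79fb1f4e58eda7a12075f9427ca364/
If only PrismUI.tar.gz is present and there is no console subdirectory, the symptom described above applies.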
You may find the delete_old_prismui_files.py script had run and deleted old unnecessary Prism UI data files before the AOS upgrade in the "/home/nutanix/data/logs/delete_old_prismui_files.log" file.
nutanix@PCVM:-$ cat ~/data/logs/prism_gateway.log | grep 'ProtobufZNodeManagementException$BadVersionException'. This issue has been fixed in these Prism Central releases.
pc.2021.9.0.6pc.2022.6.0.6, or later
Please upgrade Prism Central before carrying out an AOS upgrade on PE clusters. If you have already been experiencing this issue, please get in touch with Nutanix Support for recovery.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NCC Health Check: vmknics_subnet_check. ### Assistant: The NCC health check vmknics_subnet_check verifies if vmknics on ESXi host have IP addresses in the same IP subnet.. The NCC vmknics_subnet_check test verifies if vmknics IP addresses are configured on the same subnet on any given host.
Note: vmknics in the same subnet on the same ESXi host is an unsupported configuration.
Running the NCC Check
Run this check as a part of the complete NCC checks.
nutanix@cvm$ ncc health_checks run_all
Or run the vmknics_subnet_check check individually.
nutanix@cvm$ ncc health_checks run_all
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is not scheduled to run on an interval.
This check does not generate an alert.
Sample output
If multiple vmknics are found with IP addresses in the same subnet, the check reports a warning, and an output similar to the following is displayed.
nutanix@cvm$ ncc health_checks run_all
Output messaging
Check ID: 106442
Description: Check if vmknics have different subnets
Causes of failure: vmknics have IP address configured in the same subnet
Resolutions: Correct the IP addressing in the network. Review KB 2722 for more details.
Impact: vmknics in the same IP subnet on the same ESXi host is unsupported.
Note: Nutanix does not recommend using two vmknics in the same subnet on ESXi.
When the two vmknics reside on the same subnet, ESXi is unable to split vMotion traffic and other management traffic apart according to the GUI configuration. vMotion is a burst-type workload that uses no bandwidth until DRS or a vSphere administrator starts a vMotion (or puts a host into maintenance mode). But when a vMotion starts, the network interface gets saturated as the vMotion is also used for the Nutanix Cluster. Nutanix recommends using two different subnets.
Verify the vmknics configuration and rectify the configuration so that the vmknics belong to different subnets.
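To review the current vmknic addressing on each host, a standard ESXi command such as the one below can be used (shown as a sketch; run it on each host, or via hostssh from a CVM):
root@esxi# esxcli network ip interface ipv4 get
The output lists every vmkernel interface with its IPv4 address and netmask, making it easy to spot two vmknics that fall into the same subnet.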
After the problem is resolved and the vmknics are in different subnets or one of the vmknics is removed, restart the CVM (Controller VM) to read the network configuration changes.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Nutanix Files: Unable to access TLDs after rename. ### Assistant: Nutanix Files 4.3 has a regression causing TLDs to be unaccessible after renaming.. Nutanix Files 4.3 has a regression that causes TLDs to be inaccessible after renaming them. This only applies to TLDs created on Nutanix Files versions pre-3.5.Below is the error the client will get when accessing the TLD.
\\<File Server Name>\<Share Name>\<TLD Name> is unavailable.
Verification: Run the below on any FSVM to verify that the FSVMs cannot locate the metadata for the TLD. Replace <Name of TLD> with the actual name of the TLD.
\\<File Server Name>\<Share Name>\<TLD Name> is unavailable. . This issue has been resolved in Nutanix Files 4.3.0.1. Please upgrade to that version or later. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Nutanix Files - Recall request has failed. ### Assistant: This article describes an issue where recall request fails on a Nutanix Files cluster with tiering enabled after being upgraded to version 4.3. This results in files not being accessed from the s3 store.. The issue occurs on clusters with Tiering enabled after upgrading the Files cluster from ≥ 4.0 and < 4.3 to version 4.3. Recall request fails, resulting in files not being accessible from the s3 store.
Identifying the problem
In the "Data Lens Dashboard", under the Tasks, you see a failed "Recall Request".When you manually recall tiered data, the task either fails after a prolonged time or takes longer than usual to succeed.If the recall request is successful, the recalled file is partially full or cannot be opened due to an error.The base URL value of the tiering profile is prefixed with the bucket name as "bucket_name.endpoint":
nutanix@FSVM$ afs tiering.list_profiles
. Nutanix is aware of the issue and is working on a resolution. Contact Nutanix Support for immediate assistance.
Collecting additional information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB-2871.
Run a complete NCC health_check on the cluster. See KB-2871.
Collect Files related logs. For more information on Logbay, see KB-3094.
CVM logs stored in ~/data/logs/minerva_cvm*
NVM logs stored within the NVM at ~/data/logs/minerva*
To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader as there were issues seen otherwise. To get the Minerva leader IP on the AOS cluster, run:
nutanix@CVM$ afs info.get_leader
Once you are on the Minerva leader CVM, run the following:
nutanix@CVM$ afs info.get_leader
For example:
nutanix@CVM$ afs info.get_leader
Attaching files to the case
To attach files to the case, follow KB-1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Deploying Witness VM on an ESXi cluster version 6.5 or earlier fails with the error "Selected OVA file is not a valid template". ### Assistant: This article describes investigates an issue with deploying a Witness VM on ESXi 6.5 and earlier.. Deploying Witness VM on an ESXi cluster version 6.5 or earlier will fail with the below message:
Selected OVA file is not a valid template.
The issue is hardware incompatibility, as the Witness VM ESXi deployment OVA file is built from the ESXi 6.7 image. The issue applies only to new Witness VM deployment on ESXi 6.5 or earlier using the OVA file and does not apply to upgrading the Witness VM. The below Witness VM releases are not compatible with deployment on ESXi 6.5 and earlier: 5.20, 5.20.1, 5.15.7, 5.15.6, 6.0, 6.0.1, 5.19.2. Witness VM is forward and backwards compatible with AOS STS and LTS releases. Deploy the 5.15.5 Witness VM build located under the "Other Versions" tab on the Witness VM downloads page. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NCC Health Check: inode_usage_check. ### Assistant: The NCC check inode_usage_check verifies whether the number of free inodes on CVMs is getting low.. The NCC check inode_usage_check verifies whether the number of free inodes on CVMs is getting low.
An inode contains information (metadata) about a file in a file system, including where the data is stored, file name, file permissions and so on. Each directory and file has a corresponding inode, in case of a big file size – more than one inode. A maximum number of inodes for each filesystem is defined at FS creation time and cannot be increased.
The inode_usage_check returns the following statuses:
PASS if inode usage is at 75 percent or below
WARN if inode usage is between 75 and 90 percent
FAIL if inode usage is above 90 percent
Running the NCC Check
It can be run as part of the complete NCC check by running:
nutanix@cvm$ ncc health_checks run_all
or individually as:
nutanix@cvm$ ncc health_checks run_all
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every 5 minutes, by default. This check will generate an alert A1027 after 1 failure across scheduled intervals.
Sample output
For Status: PASS
nutanix@cvm$ ncc health_checks run_all
For Status: WARN
nutanix@cvm$ ncc health_checks run_all
For Status: FAIL
nutanix@cvm$ ncc health_checks run_all
Output messaging
Check ID: 1004
Description: Checks if current inode usage is high.
Causes of failure: Inode usage is high.
Resolutions: Reduce disk usage or replace disk.
Impact: Cluster performance may be significantly degraded. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.
Alert ID: A1027
Alert Smart Title: Disk Inode Usage High on Controller VM svm_ip_address
Alert Title: Disk Inode Usage High
Alert Message: Inode usage for one or more disks on Controller VM svm_ip_address has exceeded inode_usage_threshold%.
The NCC health check inode_usage_check fails when one or more filesystems on the disks are running out of free inodes or the overall cluster storage is running out of free inodes.
Ensure that the inodes are free on the CVM (Controller VM) which is reported in the failure section of the NCC check:
nutanix@cvm:~$ df -i
Example output (note the IUse% column):
nutanix@cvm:~$ df -i
If you observe that /home is indicating a high inode usage percentage, then determine which directory or directories have high inode usage using the following command:
nutanix@cvm:~$ df -i
Note: If you see any other directory (like /home/nutanix/data/stargate-storage/disks/<serial>) with high inode usage percentage, use the following command:
nutanix@cvm:~$ df -i
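As a generic sketch for locating inode-heavy directories (not the exact command from the original article), counting files per subdirectory with standard tools is usually enough:
nutanix@cvm:~$ for d in /home/nutanix/*; do echo "$(find "$d" 2>/dev/null | wc -l) $d"; done | sort -rn | head
The directory with the largest file count is typically the one consuming the inodes; do not delete anything without confirming it is safe to remove.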
Once the above outputs are collected, engage Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Space Accounting - Prism show cluster/container space usage different than total usage from all VMs on it.. ### Assistant: This article explains why space usage on the cluster/container level can be different from total disk usage from all User VMs.. For other Space Accounting issues/questions that are not covered on this page, please take a look at the Space Accounting | Nutanix Storage Usage Guide and FAQ article.When comparing space usage on the cluster/container level with space usage reported in total for all User VMs from this cluster/container, it often can be found these numbers do not match.
Scenario A: comparing cluster-level usage with usage from all User VMs.
Prism UI, VM view table shows all User VMs on the cluster with storage used/allocated for each VM.
It is possible to take the current usage numbers from all User VMs on the cluster and find the total usage number. But total usage from all User VMs almost always will not match with current usage on the cluster (on the storage pool level).
Here is an example, from a lab cluster with only 6 User VMs.One VM has only 1.61 GiB of usage (only guest OS installed), and the remaining 5 VMs have 196.69 GiB of storage usage (usage was created by independently generating some data on the Guest OS level):
In total, all 6 VMs have storage usage of ~986.66 GiB of logical usage, which for an RF2 container should translate into 1.93 TiB of physical usage on cluster/pool level.
At the same time, on the storage pool level (cluster-wide), usage is at ~1.16 TiB of physical usage.
nutanix@cvm$ ncli sp ls | tail -3
This is significantly less than expected for usage on the VMs/VGs level.
Scenario B: comparing container level space usage with usage from all User VMs.
It is possible to compare usage on the container level with total usage from all User VMs/VGs on this container.
Example from the same cluster as in scenario A, where we have a set of Data VMs with an identical amount of data (~196 GiB) written on Guest OS level.Here, usage for a highlighted container named “ctr1_compression” is 25 GiB. But at the same time, on the “Breakdown” view for this container, we can see there is only one VM with two disks and total usage of ~196 GiB.
If a container ha space-saving (data reduction) features enabled, then actual usage on the container level will be noticeably lower than the logical usage on the VM level.
Details on why total logical usage from all User VMs/VGs does not always match with cluster/container usage numbers.Described comparison scenarios will often show differences in space usage on the cluster/container level compared to what reported usage on the VMs/VGs level (in total).This difference can be explained by the fact that logical usage on VM/VG level is not taking into account several factors: Space savings (or Data Reduction), Snapshots, Clones, and system garbage.The table below has some more details that explain how each factor can affect space usage differences reported on different levels (cluster/container vs. VMs/VGs).
Factor to take into account / Details on how this factor impacts space usage on Cluster/Container level vs. VM/VG level.
Space savings (or data reduction) features are enabled on containers.
Savings from Compression, Deduplication, and Erasure Coding are only considered on the cluster/container level but not on VM/VG logical level. Container logical usage reported in Prism is obtained from physical usage related to this container, divided by the replication factor of that container. The logical usage on a VM/VG level show usage (amount of written data) on virtual disks without considering replication factor or space-saving techniques, or the real amount of data stored on the physical disks in nodes.
So, due to savings, you can see less usage reported on the cluster/container level compared with VM/VG level.
Snapshots (any snapshots on the Nutanix storage layer).
The usage form snapshots are not reported/reflected in the main VM/VG view or on the container breakdown view in the Prism Element UI. But snapshot usage is part of cluster/container usage. (any snapshots on the Nutanix storage layer).
Due to snapshots-related usage, you can see cluster/container usage numbers are higher than total VM/VG usage numbers (multiplied by RF factor for case of comparison with storage pool usage).
Clones
The Cloned VM/VG will report the same logical usage as the original VM, but the shared amount of data is stored only once and not multiplied on the physical layer as a result of each cloning operation.
So, due to shared usage from cloning – usage numbers on Cluster/Container level will be less than you can get in total on the VM/VG logical layer.
Example: One VM with 1 TiB usage was cloned 5 times. So, in Prism on VM view (or in container breakdown view), you will see 6 VMs, each with 1 TiB logical usage. But on the container level, you will see logical usage of 1 TiB (initial usage of the original VM). And only if/when each cloned VM completely overwrites shared data (so nothing is shared) – container logical usage will match with total VM logical usage (if no other factors are in play).
System garbage/overhead. There are different types of garbage can exist in a cluster. When some data is deleted from the cluster/container (it can be a whole VM, VM disk, snapshot, or it is just some data/files deleted by a user inside Guest OS) – space is not immediately returned as free. Nutanix cluster usually needs some time to reclaim deleted data and return it as free space. The time required for the reclamation depends on the amount of data to be reclaimed and which data reduction techniques are in use. And all deleted but not reclaimed yet data is considered garbage. Also, even if there were no manual actions taken by the user to remove something from the cluster (like VMs/Disks/Snapshots removed, or some data deleted from the Guest OS layer) – the Nutanix cluster is still expected to have some amount of garbage generated due to data overwrites which happens normally during UVMs/Applications work (it can be generated also due to old DR snapshots automatically expiring). The amount of garbage is heavily dependent on workload patterns in User VMs and on storage container configuration. On heavily loaded clusters the amount of garbage often can reach 5-10% of total usage, and it usually fluctuates in accordance with workload changes (depending on the time of the day for example) and with automatic cleanup/reclamation that happens on a cluster periodically (driven by Curator service). Garbage usage is only accounted for on a Cluster/Container usage layer, and it is not considered on the VM/VG logical usage layer. So, due to some amount of garbage present in any Cluster, it is expected to see usage on Cluster/Container level to be higher than expected VM/VG logical usage (if there are no other factors in play here).
Note: Remember that all these factors are typically in play simultaneously (if all of them are in use/enabled), and often they can compensate for each other. For example, additional usage from snapshots can be partially/fully compensated by savings from Erasure Coding. Ultimately, logical usage on the container will match total logical usage from VMs/VGs.. Main recommendation: total logical usage from VM/VG level should not be directly compared with usage on cluster/container level without considering all factors mentioned in the description section of this KB.If you believe that cluster/container usage is not correct for the amount of data in User VMs/VGs - then each mentioned factor should be deeply analyzed to understand where the difference in usage comes from.It is recommended to start the analysis process by checking:
The Storage Overview view in Prism Element to see the data reduction ratio, overall efficiency ratio, amount of savings, and other details.
Prism Central - your cluster in question - Capacity Tab: Storage View - this can give a quick overview of current and historical live/snapshot/garbage usage numbers on a cluster or container level.
KB Space Accounting | Are the space-saving settings on my containers configured correctly - to see best practices and how to check your current configuration and the amount of savings.
KB Space Accounting | Identifying Snapshots on a Nutanix Cluster to learn how to identify snapshots in a cluster and find their reclaimable space.
More details about the implementation of different data reduction and data avoidance techniques can be found in the Data Efficiency tech note.
For other Space Accounting issues/questions that are not covered on this page, take a look at the Space Accounting | Nutanix Storage Usage Guide and FAQ article.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NCC Health Check: esx_product_locker_setting_check. ### Assistant: The NCC health check esx_product_locker_setting_check checks for ESXi ProductLocker settings.. The NCC health check esx_product_locker_setting_check checks for ESXi ProductLocker settings. This check will generate a FAIL if the host contains a ProductLocker symlinked to any container.This check was introduced in NCC 4.1.0.
Running NCC Check
Run this check as part of the complete NCC Health Checks
nutanix@cvm$ ncc health_checks run_all
Or run this check separately:
nutanix@cvm$ ncc health_checks run_all
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every day
Sample output
For Status: PASS
nutanix@cvm$ ncc health_checks run_all
For Status: FAIL
nutanix@cvm$ ncc health_checks run_all
For Status: INFO
nutanix@cvm$ ncc health_checks run_all
Output messaging
Check ID: 106420
Description: Check ESXi ProductLocker settings
Causes of failure: The host contains a ProductLocker location that is not properly configured
Resolutions: Manually configure the ProductLocker location. Review KB 10204.
Impact: CVMs appear to be stunned or frozen periodically on an ESXi cluster, potentially causing instability of cluster services and VMs. This issue can affect multiple CVMs at once, causing a complete cluster outage.
[nutanix@CVM] hostssh 'ls -latr / | grep ^l'
Example output:
[nutanix@CVM] hostssh 'ls -latr / | grep ^l'
Map the container:
[nutanix@CVM] hostssh 'ls -latr / | grep ^l'
In the above example, we can see the /productLocker symlink is pointing to a non-local datastore. You may see the following ESXi setting changed to the path seen below in the String Value line:
[nutanix@CVM] hostssh 'ls -latr / | grep ^l'
Note: If the productLocker String Value points to a Metro Protected datastore, engage Nutanix Support for remediation. If Metro Availability is disabled, do not re-enable it and engage Nutanix Support immediately. Re-enabling Metro in this state can cause CVMs to lock up and restart, leading to a potential outage.
In order to resolve this issue, reconfigure the ESXi hosts to point productLocker back to the local SATADOM (default config). For more information on this, you can refer to the below VMware article. Ensure the change is effective.
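A hedged example of checking and restoring the setting from the hosts is shown below; the default value varies by ESXi release (for example, /locker/packages/vmtoolsRepo on newer builds), so confirm the correct default for your version before setting it:
[nutanix@CVM] hostssh 'esxcli system settings advanced list -o /UserVars/ProductLockerLocation'
[nutanix@CVM] hostssh 'esxcli system settings advanced set -o /UserVars/ProductLockerLocation -s /locker/packages/vmtoolsRepo'
A host reboot may still be required for the /productLocker symlink to point at the new location.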
In case the above-mentioned steps do not resolve the issue, engage Nutanix Support, gather the output of "ncc health_checks run_all" and attach it to the support case. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about ESXi - Rename an ESXi host in a Nutanix Environment. ### Assistant: This article provides a proper way to rename an ESXi host, especially after running a bare-metal Foundation install. This also includes non-configurable components which needs to be checked before changing any aspect on Esxi hosts.. For changing the hostname of an ESXi host, as far as Nutanix is concerned, renaming a host is an instant and secure procedure, and can be done by following VMware instructions, from the below VMware Article: Changing the name of an ESX or ESXi host (1010821).
The following ESXi settings should NOT be manually changed, as some of them are non-configurable components and others have dependencies on ESXi functionality (please involve VMware Technical Support in case there are any concerns about these components):
NFS datastore settings
VM swapfile location
VM startup/shutdown order
CVM Name
CVM virtual hardware configuration file (.vmx file). Each AOS version and upgrade includes a specific CVM virtual hardware configuration. Therefore, do not edit or otherwise modify the CVM virtual hardware configuration file.
iSCSI software adapter settings
vSwitchNutanix standard virtual switch
vmk0 interface in port group "Management Network"
SSH enabled
Host firewall ports that are open
For complete information, please refer to the non-configurable ESXi components outlined in Non-Configurable ESXi Components.. For reference, you can check the below steps to change the hostname:
Log in to the ESXi host to rename through SSH or DCUI. Display current hostname:
root@esxi# hostname
Run the following command to change the hostname:
root@esxi# hostname
Run the following command to set the fully qualified domain name. Note that --host and --fqdn are mutually exclusive.
root@esxi# hostname
Display the new hostname and fqdn:
root@esxi# hostname
To clear the domain name and leave only the hostname:
root@esxi# hostname
root@esxi# hostname
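Because the command lines above all render identically in this copy, here is a hedged sketch of the usual ESXi commands for these steps (hostnames are placeholders):
root@esxi# esxcli system hostname set --host=esxi-node01
root@esxi# esxcli system hostname set --fqdn=esxi-node01.example.com
root@esxi# esxcli system hostname get
The --host and --fqdn options are mutually exclusive per invocation, as noted above, and the get subcommand prints the resulting host name, domain name, and FQDN.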
Depending on the ESXi version, changes might be reflected automatically or imply a service restart.
ESXi 5.1 / ESXi 5.5: After changing the hostname, execute the following command in the ESXi host. Prism will reflect the changes after about 30 seconds. Be aware that restarting hostd is a mildly disruptive operation.
root@esxi# hostname
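The restart referenced for ESXi 5.1/5.5 is typically the hostd init script (shown here as an assumption, since the original command line did not survive extraction):
root@esxi# /etc/init.d/hostd restart
Restarting hostd briefly interrupts management-agent connectivity but does not affect the data path of running VMs.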
ESXi 6.x: Prism will reflect the changes automatically without restarting any service.
There is a known limitation with the NFSv3 implementation in VMware. The NFS lock files (.lck) in the VM directory keep the previous hostname, whether the VMs are existing, moved via vMotion, or newly created. This limitation is only cleared by rebooting the ESXi host. We recommend rebooting the host after a host rename because troubleshooting and log analysis can otherwise be more difficult. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NCC Health Check: cfs_fatal_check. ### Assistant: NCC health check cfs_fatal_check is a check to determine if the CFS process is in a stable state.. The NCC health check cfs_fatal_check is a check to determine if the CFS (Collector Framework Service) process is stable. The CFS process resides under the cluster_health service and sends Pulse data to Insights. The check will trigger if the CFS process has restarted at least 4 times in the past 2 hours. Until the CFS process has stabilized, sending remote support/pulse Insights data and proactive support may be delayed.Before running this check, upgrade NCC to the latest version. This check was introduced in NCC 4.6.1.Running the NCC CheckYou can run this check as part of the complete NCC Health Checks.
nutanix@cvm$ ncc health_checks run_all
Or you can run this check separately.
nutanix@cvm$ ncc health_checks run_all
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every 7200 seconds. This check will generate the "CFS process is not in a stable state" alert.
Sample Outputs
For Status: PASS
nutanix@cvm$ ncc health_checks run_all
For Status: Warning
nutanix@cvm$ ncc health_checks run_all
Output messaging
Check ID: 140005
Description: This is a check to determine if the CFS process is stable. The CFS process resides under the cluster_health service and sends Pulse data to Insights. The check will trigger if the CFS process has restarted at least 4 times in the past 2 hours.
Causes of failure: The CFS process could repeatedly restart if it hits an unrecoverable error, or the CFS process fails to perform a task dependent on the filesystem or zookeeper process, or if it is killed by Out Of Memory (OOM) killer.
Resolutions: The CFS process sits under the cluster_health service on the Controller VM. Verify that the cluster_health service on the affected node is running. Check the cfs.out and cluster_health service logs for more details about the cause of the crash.
Impact: Insights data and proactive support can be delayed until the CFS process has stabilized.
Verify that the CFS service is running on the cluster:
nutanix@CVM:~$ ps aux | grep /home/nutanix/ncc/bin/nusights/cfs | grep -v grep | awk '$11 == "/home/nutanix/ncc/bin/nusights/cfs" { print $0 }'
Check if the CFS service has recently crashed. In the below command, the CFS process has been running for 2 days, 6 hours, 2 minutes and 33 seconds.
nutanix@CVM:~$ ps aux | grep /home/nutanix/ncc/bin/nusights/cfs | grep -v grep | awk '$11 == "/home/nutanix/ncc/bin/nusights/cfs" { print $0 }'
Verify the cluster_health service is running and check for recent fatals in the health_server.log. Note that a large number of PIDS next to the cluster_health service can indicate that the CFS process has been restarting:
nutanix@CVM:~$ ps aux | grep /home/nutanix/ncc/bin/nusights/cfs | grep -v grep | awk '$11 == "/home/nutanix/ncc/bin/nusights/cfs" { print $0 }'
Restart the cluster_health service to attempt to stabilize the CFS process:
nutanix@CVM:~$ ps aux | grep /home/nutanix/ncc/bin/nusights/cfs | grep -v grep | awk '$11 == "/home/nutanix/ncc/bin/nusights/cfs" { print $0 }'
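A common way to restart a single CVM service on Nutanix is sketched below (verify this is appropriate for your environment before running it; it restarts cluster_health only on the local CVM):
nutanix@CVM:~$ genesis stop cluster_health && cluster start
cluster start only brings up services that are stopped, so the rest of the stack is not touched.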
Monitor the stability of the CFS process by re-running this NCC check or via steps 1 and 2.
If the above steps do not resolve the issue, consider engaging Nutanix Support. Additionally, gather the following command output and attach it to the support case:
nutanix@CVM:~$ ps aux | grep /home/nutanix/ncc/bin/nusights/cfs | grep -v grep | awk '$11 == "/home/nutanix/ncc/bin/nusights/cfs" { print $0 }'. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NCC Health Check: check_license_compliance, weekly_pre_expiry, daily_pre_expiry. ### Assistant: The NCC checks check_license_compliance, weekly_pre_expiry, daily_pre_expiry verifies whether the cluster license is valid or is about to expire.. The NCC Health Check check_license_compliance reads the license-related information and verifies if the license on all the nodes is valid.
This check performs the following:
Checks if the license is about to expire.
Checks license feature compliance.
Checks license standby mode.
Checks if all nodes have valid licenses.
Checks for license capacity.
Checks for Nutanix Calm license violation.
Checks if Nutanix Calm license is about to expire.
Checks if Nutanix Objects license is about to expire.
NCC Health Checks weekly_pre_expiry and daily_pre_expiry are added in NCC 4.6.5 and verify if installed licenses are about to expire.
The check behavior depends on the number of days before license expiration:
12 to 6 weeks to license expiration, an Info alert is generated weekly by the weekly_pre_expiry check.
6 to 2 weeks to license expiration, a Warning alert is generated weekly by the weekly_pre_expiry check.
2 weeks before license expiration, a Critical alert is generated daily by the daily_pre_expiry check.
Running the NCC Check
You can run this check as part of the complete NCC Health Checks:
nutanix@cvm$ ncc health_checks run_all
Or run the checks separately:
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
Sample output
For status: PASS
nutanix@cvm$ ncc health_checks run_all
For status: INFO
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
For status: ERR
nutanix@cvm$ ncc health_checks run_all
For status: WARN
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
For status: WARN (From NCC-4.1.0 onwards)
nutanix@cvm$ ncc health_checks run_all
Output messaging
Check ID: 14000
Description: Check if the license is about to expire.
Causes of failure: The license is set to expire or has expired.
Resolutions: Contact Nutanix to extend the expiration of the license or apply a new license.
Impact: The cluster will be out of licensing compliance.
Alert ID: A1075
Alert Title: License Expiry
Alert Message: Detailed license expiry info: license_expiry_msg.
Schedule: This check is scheduled to run every hour.
Number of failures to alert: This check will generate an alert after 1 failure.

Check ID: 14001
Description: Check license feature compliance.
Causes of failure: The cluster was configured to use features not allowed by the license.
Resolutions: Apply a license with the necessary entitlement or stop using the unlicensed features.
Impact: The cluster is out of licensing compliance.
Alert ID: A1077
Alert Title: License Feature Violation
Alert Message: Features not permitted by license(s) applied are being used. Non-compliant features: feature_violation_list
Schedule: This check is scheduled to run every hour.
Number of failures to alert: This check will generate an alert after 1 failure.

Check ID: 14002
Description: Check license standby mode.
Causes of failure: The license file has not been applied after cluster summary file generation.
Resolutions: Complete the licensing workflow.
Impact: The cluster is out of licensing compliance.
Alert ID: A1085
Alert Title: Licensing workflow is incomplete
Alert Message: Licensing process for this cluster is incomplete.
Schedule: This check is scheduled to run every hour.
Number of failures to alert: This check will generate an alert after 4 consecutive failures across scheduled intervals.

Check ID: 14003
Description: Check if all nodes have valid licenses.
Causes of failure: Nodes in the cluster have different types of licenses.
Resolutions: Contact Nutanix to extend the expiration of the license or apply a new license.
Impact: The cluster is out of licensing compliance.
Alert ID: A1086
Alert Title: License Node/Core Invalid
Alert Message: Cluster has nodes/cores which don't have sufficient valid licenses: license_insufficiency_list
Schedule: This check is scheduled to run every day.
Number of failures to alert: This check will generate an alert after 1 failure.

Check ID: 14004
Description: Check for license capacity
Causes of failure: The license capacity for the cluster is non-compliant.
Resolutions: Apply a new license.
Impact: The cluster is out of licensing compliance.
Alert ID: A14004
Alert Title: License Invalid
Alert Message: Licensed capacity is less than cluster capacity. license_capacity_msg
Schedule: This check is scheduled to run every day.
Number of failures to alert: This check will generate an alert after 1 failure.

Check ID: 14005
Description: Nutanix Calm License Violation
Causes of failure: The total number of VMs and containers managed by Calm exceeds the licenses applied.
Resolutions: Please upgrade your Calm license with additional license packs. If you cannot resolve the issue, please contact Nutanix support.
Impact: The cluster is out of licensing compliance.
Alert ID: A14005
Alert Title: Nutanix Calm License Violation
Alert Message: Nutanix Calm usage is above the licensed count by 'more_required_packs' license pack(s).
Schedule: Before NCC 4.6.5: this check is scheduled to run every 30 minutes. NCC 4.6.5 and newer: this check is scheduled to run every 60 minutes.
Number of failures to alert: This check will generate an alert after 2 consecutive failures across scheduled intervals.

Check ID: 14006
Description: Check if Calm license is about to expire.
Causes of failure: The Calm license is set to expire or has expired.
Resolutions: Contact Nutanix to extend the expiration of the Calm license, or apply a new Calm license.
Impact: The Calm instance will be out of licensing compliance.
Alert ID: A14006
Alert Title: Calm License Expiry
Alert Message: This cluster_type has license_type license with expiry date of license_expiry_date. The calm license expiry_str.
Schedule: Before NCC 4.6.5: this check is scheduled to run every 30 minutes. NCC 4.6.5 and newer: this check is scheduled to run every 60 minutes.
Number of failures to alert: This check will generate an alert after 1 failure.
Check ID14007DescriptionNutanix Calm Trial License ExpiredCauses of failureNutanix Calm Trial License of 60 days is set to expire or has expired.ResolutionsPlease upgrade your Calm license with additional license packs. If you cannot resolve the issue, please contact Nutanix support.ImpactThe cluster is out of licensing compliance.Alert IDA14007Alert TitleCalm Trial License ExpiryAlert MessageNutanix Calm 60 days Trial License expiry_remaining_daysScheduleBefore NCC 4.6.5: this check is scheduled to run every 30 minutes. NCC 4.6.5 and newer: this check is scheduled to run every 60 minutes.Number of failures to alertThis check will generate an alert after 1 failure.
Check ID14008DescriptionCheck if licenses are about to expire.Causes of failureThe license is set to expire or has expired.ResolutionsContact Nutanix to extend the expiration of the license, or to apply a new license.ImpactThe Prism Central instance will be out of licensing compliance.Alert IDA14008Alert TitlePC License ExpiryAlert MessageDetailed license expiry info: license_expiry_msgScheduleThis check is scheduled to run every hour.Number of failures to alertThis check will generate an alert after 1 failure.
Check ID14009DescriptionCheck if Storage licenses are in capacity compliance.Causes of failureTotal storage usage exceeds the license applied.ResolutionsPlease upgrade your license with additional capacity. If you cannot resolve the issue, please contact Nutanix support.ImpactThe Prism Central instance will be out of licensing compliance.Alert IDA14009Alert TitleStorage License OvershootAlert MessageStorage usage is out of compliance. Detailed info: license_insufficiency_listScheduleThis check is scheduled to run every hour.Number of failures to alertThis check will generate an alert after 1 failure.
Check ID14010DescriptionCheck if User-VM-based licenses are in capacity compliance.Causes of failureTotal number of user VMs exceeds the licensed quantity.ResolutionsUpgrade the license with additional capacity or reduce the number of User-VMs in use. If this does not resolve the issue, contact Nutanix support.ImpactThe Prism Element cluster is out of licensing compliance.Alert IDA14010Alert TitleUsers-based License OvershootAlert MessageTotal number of users is out of capacity compliance. Detailed info: license_insufficiency_listScheduleThis check is scheduled to run every hour.Number of failures to alertThis check will generate an alert after 1 failure.. Apply a valid cluster license to resolve this alert. Refer to License Manager Guide for more information about license management.If a newly purchased node was recently added to the cluster and the NCC check reports an INFO status with a message similar to the following, a new license file needs to be re-generated.
INFO: Cluster has nodes which don't have Pro license. List of nodes with invalid license [u'No license for 1 nodes of model NX-3060-G4']
To re-generate a new license file, see the Nutanix Licensing Guide.
You can also disable the Prism Pro features mentioned in the Licensing Guide.
To show the feature list for your license, see Displaying License Features and Details.
If the check reports an ERR message, or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support. Known issues: For Check ID 14010, you may see instances where the check reports on the 'number of users' instead of the 'number of User-VMs.' The formula used to calculate VM license utilization is as follows:
INFO: Cluster has nodes which don't have Pro license. List of nodes with invalid license [u'No license for 1 nodes of model NX-3060-G4']. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Genesis crashing "No vmknic exists in same subnet as eth0/eth2". ### Assistant: This article describes an issue where genesis is crashing with the message "No vmknic exists in same subnet as eth0/eth2.". While adding a new node to a cluster, the node is not being discovered because genesis is crashing and Foundation is not starting.
genesis.out log shows:
nutanix@CVM~$ less ~/data/logs/genesis.out. There are two possible reasons for the issue:
The CVM eth0 has an APIPA address. Modify /etc/sysconfig/network-scripts/ifcfg-eth0 and assign a static IP address, gateway and network mask. Change BOOTPROTO="dhcp" to BOOTPROTO="none".
Here is an example of an ifcfg-eth0 output:
nutanix@CVM~$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
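As an illustration only, a static configuration typically resembles the following; the IP address, netmask and gateway below are placeholders and must be replaced with values valid for your environment:
DEVICE="eth0"
BOOTPROTO="none"
ONBOOT="yes"
IPADDR="10.1.1.10"
NETMASK="255.255.255.0"
GATEWAY="10.1.1.1"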
Host vmk port has APIPA address assigned. The Host vmknic IP configuration can be checked with esxcfg-vmknic -l. Assign a static IP either through IPMI/DCUI or if there is an ssh connection through the CVM using the command below:
nutanix@CVM~$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
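If you choose to set the vmkernel IP from the command line rather than through IPMI/DCUI, a typical esxcli form looks like the sketch below; vmk0 and the addresses are examples, so confirm the correct vmk interface with esxcfg-vmknic -l first:
~# esxcli network ip interface ipv4 set -i vmk0 -I 10.1.1.20 -N 255.255.255.0 -t static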
After making either of the changes above, reboot the CVM:
nutanix@CVM~$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Expand Cluster workflow fails on "Imaging of node" task. ### Assistant: Expanding an ESXi cluster with a node that requires uploading an ESXi iso for imaging may fail if the version of ESXi requested is not compatible with the AOS version running in the cluster.. Expand cluster workflow fails for an ESXi cluster when uploading an ESXi iso to reimage the new node.Clicking through Details hyperlinks of the "Node preparation for expand cluster" task, the task fails with "Imaging of node <new_node_ip> failed. Status: fatal: Running validations".Using allssh 'ls -larth ~/data/logs/foundation/2023*/' (replacing "2023" with the current year), we can locate node_<new_node_ip>.log.
nutanix@NTNX-CVM:~$ allssh 'ls -larth ~/data/logs/foundation/2023*/'
node_<new_node_ip>.log on the previously identified CVM shows the following StandardError. (NOTE: md5sum and versions mentioned can differ)
nutanix@NTNX-CVM:~$ allssh 'ls -larth ~/data/logs/foundation/2023*/'
. Upgrade the cluster to the minimum AOS version mentioned in the error message, or use a lower version of ESXi and then upgrade the ESXi version outside of Prism Element UI to match the other hosts.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NCC Health Check: ofpfmfc_table_full_check. ### Assistant: NCC 3.9.0. The NCC health check ofpfmfc_table_full_check checks the number of rules configured in Open vSwitch (OVS) tables.. The NCC health check ofpfmfc_table_full_check checks the number of rules configured in Open vSwitch (OVS) tables and alerts if this number is approaching the limit.
For every NIC on every user VM, the acropolis service configures multiple rules inside Open vSwitch (OVS) tables.
If the number of configured rules is more than 80% of the limit, then a warning alert is generated.If the number of configured rules is more than 90% of the limit, then a critical alert is generated.
If the maximum number of rules is reached, then user VMs will not be able to communicate over the network.
This check only applies to Prism Element hosts.
Running the NCC Check
You can run this check as part of the complete NCC Health Checks:
nutanix@cvm$ ncc health_checks run_all
Or you can run this check individually:
nutanix@cvm$ ncc health_checks run_all
You can also run the checks from the Prism web console Health page: Select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every hour, by default.
This check generates an alert after 1 failure and will self-resolve if the number of rules decreases below the threshold.
Sample output
For Status: PASS
nutanix@cvm$ ncc health_checks run_all
For Status: FAIL
nutanix@cvm$ ncc health_checks run_all
For Status: ERR
nutanix@cvm$ ncc health_checks run_all
Output messaging
Check ID200607DescriptionChecks that number of OVS rules is not reaching maximum number of rulesCauses of failureOpenFlow table is getting fullResolutionsConsider turning off unnecessary user VMs or remove unused vNICs from running user VMs on AHV host. For higher number of allowed OVS rules, ensure AOS version at least 5.10.5 and above on cluster.ImpactWhen number of OVS rules will reach maximum, some VMs may lose network connectivity.Alert IDA200607Alert TitleOpenflow Table Getting Full.Alert MessageNumber of OVS rules is num_flow_table_rules for bridge bridge_name on AHV host host_ip. This is more than usage_msg% of maximum allowed number of rules.. The "flow_table_limit" has been increased from 16k to 32k in AOS 5.20 which is bundled with AHV 20201105.2030. If you are running a lower version then consider upgrading AOS and AHV accordingly to avoid hitting the 16k limit in earlier versions.If you are already running AOS 5.20 and AHV 20201105.2030 or newer, and this check returns WARN or FAIL, consider powering off some user VMs or moving them to a different AHV host.
Scenarios that may trigger the flow table to fill up. There are currently 3 known Scenarios that may trigger the OpenFlow Table to have more entries than expected:
Broadcast storm or switching loop in the environment. Trunk port that is allowing all VLANs instead of only the VLANs needed in the Nutanix environment. A Cisco ACI environment running in Application Centric Mode using "Flood in Bridge Domain" instead of "Flood in Encapsulation".
Troubleshooting: Run the following commands on the AHV host with alerts. The counts from these commands will not be the same, but they should be within a few hundred of each other. First, collect a list of bridges configured in OVS:
root@AHV# ovs-vsctl list-br
Sample output:
root@AHV# ovs-vsctl list-br
For each switch run the command:
root@AHV# ovs-vsctl list-br
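A common way to count the OpenFlow entries programmed on each bridge is to dump the flows and count the lines; br.dmx and br0.local below are example bridge names taken from the list above, so adjust them to the bridges present on your host:
root@AHV# ovs-ofctl dump-flows br.dmx | wc -l
root@AHV# ovs-ofctl dump-flows br0.local | wc -l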
Based on this output:
Does the number of flows on br.dmx come close to the total number of flows on br0.local plus any other vSwitch you may have? If yes, you are most likely hitting a broadcast storm in your environment. Work with your networking team to rule out a broadcast storm or loop within the network. If br.dmx reports a number much higher than br0.local plus the other vSwitches, you are most likely hitting scenario 2 or 3.
If you are hitting Scenario 2 or 3 please do the following
Verify that VLANs are pruned to the host by adding a "switchport trunk allowed vlan" statement to the switch ports so that only the VLANs needed by the AHV host are allowed. If running ACI, please engage Cisco to verify whether "Flood in Encapsulation" will work in your environment and use it instead of "Flood in Bridge Domain". If neither of these resolves your issue, please engage Nutanix Support to assist with increasing OpenFlow flow limits.
Known issues
The check may fail with the following error if bridge chaining is disabled:
root@AHV# ovs-vsctl list-br
This issue is resolved in:
AOS 5.15.X family (LTS): AOS 5.15.2AOS 5.18.X family (STS): AOS 5.18
Please upgrade AOS to versions specified above or newer.
It is not recommended to disable bridge chaining. Disabling bridge chaining should only be used for troubleshooting purposes and once troubleshooting is completed, bridge chaining should be re-enabled.
Run the following command on affected AHV hosts to re-enable bridge chaining:
root@AHV# ovs-vsctl list-br
Also restart the Acropolis service on the same node so that the other nodes become aware of the change.
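A commonly used way to restart the Acropolis service from the CVM running on that node is shown below; this is a sketch assuming standard AOS service tooling, where the service is managed by genesis and cluster start only brings back stopped services:
nutanix@cvm$ genesis stop acropolis
nutanix@cvm$ cluster start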
In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Ubuntu loads to a blank screen instead of login prompt from VM console.. ### Assistant: VM console shows black screen for Ubuntu VM, despite being able to SSH to the VM.. Ubuntu 16.04 with Kernel version 4.4.0-116 has a known issue that loads OS into a black screen instead of a login prompt (i.e., no display in Prism VM console), while SSH or OS operations are working fine.
root@vm-0:~# uname -a
root@vm-0:~# uname -a. This is a known issue in that kernel release. Upgrade the OS/Kernel to 16.04.5 and above to resolve the issue.As a workaround, use (Alt+F1) to change to tty1 to get to the login prompt. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about VirtIO 1.1.6 or newer installation on Windows 7, Windows Server 2008 R2. ### Assistant: Windows 7 and Windows 2008R2 require SHA2 support for Nutanix VirtIO 1.1.6 to be installed. Installation of VirtIO 1.1.6 or newer may fail on Windows 7 and/or Windows Server 2008R2 with the below error message:
[Window Title]
Or if you install Windows 7 or Windows Server 2008R2 from Nutanix VirtIO 1.1.6 ISO:
[Window Title]. Microsoft deprecated signing of kernel-mode drivers with SHA1 certificates in late 2019, hence as of VirtIO 1.1.6, all drivers are signed with SHA2 certificate only. Windows 7 and Windows Server 2008R2 do not have SHA2 support out of the box.To install Nutanix VirtIO 1.1.6, please make sure that Microsoft KB 3033929 is installed or the operating system is fully updated with Windows Update. For fresh deployments of Windows 7 or Windows Server 2008 R2 from Windows installation ISO on AHV clusters use Nutanix VirtIO 1.1.5. Once OS is installed, please install a fix from Microsoft KB 3033929 and then update VirtIO to 1.1.6.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about USB passthrough support on Nutanix platforms. ### Assistant: Does Nutanix support USB passthrough?. Does Nutanix support USB passthrough?. Nutanix does not support USB passthrough on all platforms/hypervisors. There are no plans to support USB passthrough on Nutanix platforms due to security concerns and due to the fact that it affects features such as live migration, HA, 1-Click upgrades and more.
Alternative
If you need a guest VM to access USB devices, using AnywhereUSB is a valid solution. This solution uses a USB hub accessible over TCP/IP. A driver is installed on the VM and it emulates a USB device connected to the hub as if it was directly attached to the VM. This method is well-known to work and does not affect the features mentioned above. Please see the Digi site for further details: www.digi.com/products/usb-and-serial-connectivity/usb-over-ip-hubs/anywhereusb. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about "Host restore VM locality” task disappears in Prism if the VM migrations fail. ### Assistant: “Host restore VM locality” task may disappear in Prism if the VMs migrations fail. This may be observed if HA is enabled on the cluster and the cluster's memory usage is high.. When the AHV host is restarted, the AOS automatically creates a "Host restore VM locality” task to migrate VMs back to the host where they were initially running. In some cases, you may notice that such a task disappears from the Tasks page in the Prism UI.. Restore VM locality task is run on the best-effort basis as we cannot guarantee that all VMs can be migrated to the original host once it is recovered.It is by design behavior that the "Host restore VM locality” task is hidden on the Tasks page in the Prism UI if we cannot restore locality for all VMs. If locality was restored for all VMs then "Host restore VM locality” tasks will continue to be shown on the Tasks page.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NCC Health Check: aged_third_party_backup_snapshot_check and aged_entity_centric_third_party_backup_snapshot_check. ### Assistant: The NCC health checks aged_third_party_backup_snapshot_check and aged_entity_centric_third_party_backup_snapshot_check check the cluster for dangling third-party backup snapshots that may be present for a protection domain longer than intended/specified.. The NCC health checks aged_third_party_backup_snapshot_check and aged_entity_centric_third_party_backup_snapshot_check check the cluster for dangling third-party backup snapshots that may be present for a protection domain. If a third-party backup snapshot has existed in the system for longer than the specified period, this check will raise an alert. These backup snapshots are snapshots that are generated from third-party backups (Veeam, Commvault, Rubrik, HYCU, etc.).
Running the NCC Check
Run this check as part of the complete NCC health checks.
nutanix@cvm$ ncc health_checks run_all
Or run the check separately.
Third-Party backup snapshot check:
nutanix@cvm$ ncc health_checks run_all
Entity Centric Specific check:
nutanix@cvm$ ncc health_checks run_all
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every day, by default.
Sample Output
For status: PASS
Third-Party backup snapshot check:
nutanix@cvm$ ncc health_checks run_all
Entity Centric Check:
nutanix@cvm$ ncc health_checks run_all
For status: FAIL
Third-Party backup snapshot check:
nutanix@cvm$ ncc health_checks run_all
Entity Centric Specific check:
nutanix@cvm$ ncc health_checks run_all
Starting with AOS 5.15.3 and 5.18.1, backup snapshots created without an expiration time are assigned a default 60-day expiration. From NCC 3.10.1 and higher on AOS 5.15.3, 5.18.1 and above, the check will only fail if the snapshots have an infinite expiration (for example, an expiry year of 2086) and have exceeded the threshold set for the alert (default 7 days). Output messaging
Check ID110250DescriptionCheck for aged third-party backup snapshots.Causes of failureThird-party backup snapshots are present in the cluster longer than the configured threshold.ResolutionsContact Nutanix support.ImpactAged snapshots may unnecessarily consume storage space in the cluster.Alert IDA110250Alert TitleAged third-party backup snapshots presentAlert Messagepd_name has num_snapshot aged third-party backup snapshot(s) and they may unnecessarily consume storage space in the cluster.
Check ID110263DescriptionCheck for the old entity-centric third-party backup snapshots.Causes of failureEntity-centric third-party backup snapshots are present in the cluster longer than the configured threshold.ResolutionsThe Recovery Point for the below VM needs to be deleted from Recoverable Entities in Prism Central. If the issue persists, contact Nutanix support.ImpactOld snapshots may unnecessarily consume storage space in the cluster.Alert IDA110263Alert TitleOld Entity-Centric Third-Party Backup Snapshots PresentAlert MessageVM vm_name has num_snapshot old third-party backup snapshot(s) and may unnecessarily consume storage space in the cluster.. The check passes if third-party backup snapshots are not present in the cluster longer than the configured threshold. The default threshold is 7 days, but this threshold value can be modified. The threshold can range from 1 day to 365 days.
Administrators may specify how long a third-party snapshot is expected to be present in the cluster through the Prism UI. This check is based on the backup schedule settings. A snapshot is created every time the third-party backup job runs, and the previous snapshot is deleted once the backup job completes. Essentially, there should be one snapshot per VM at all times unless otherwise specified within the backup software. The only other time there will be multiple snapshots for a VM is if that VM is configured within multiple backup jobs.
If the check fails, verify that the threshold set for the check matches the backup schedule and/or retention for the snapshots that are linked to the protection domain. If there are multiple backup jobs linked to the protection domain, then the threshold for this check should be equal to the longest schedule.
For example, if there are 4 backup jobs linked to the protection domain:
Backup Job 1 runs daily; Backup Job 2 runs once a week; Backup Job 3 runs once every 2 weeks; Backup Job 4 runs only once a month.
The threshold for the check should be set to 32 days. If the threshold is set correctly but the check is still failing, it is recommended to reach out to the third-party backup vendor to ensure the DELETE API calls are being sent. If the snapshots are still present after consulting with the backup vendor, consider engaging Nutanix Support at https://portal.nutanix.com.
Aged snapshots from third-party backups can cause unnecessary capacity utilization on the cluster. Nutanix recommends verifying the threshold is consistent with the expected snapshot time frame within the backup software.
To change the threshold for "Aged third-party backup snapshots present":
1. Log in to Prism Element. 2. From the Home drop-down menu, select "Alerts". 3. In Alerts, click on "Configure" and "Alert Policy". 4. Search for alert ID "A110250" or "Aged third-party backup snapshots present" and click the update policy action. 5. Configure the alert threshold to be consistent with the retention policy of the backup software you are using, then click Save.
Entity-Centric Snapshots:
The workflow for these snapshots is through Prism Central, NOT Prism Element. Workflow to find entity-centric snapshots: Prism Central UI -> Virtual Infrastructure -> VM Recovery Points -> Select the VM.
To change the threshold for "Old Entity-Centric Third-Party Backup Snapshots Present":
1. Log in to Prism Element where the alert is observed. 2. From the Home drop-down menu, select "Settings". 3. In Settings, click "Alert Policy". 4. Search for alert ID "A110263" or "Old Entity-Centric Third-Party Backup Snapshots Present" and click the update policy action. 5. Configure the alert threshold to be consistent with the retention policy of the backup software you are using, then click "Save".
NOTE: These checks could also be failing if a VM that was once being backed up via the backup software has been removed from the backup configuration and/or removed from the cluster. It is up to the backup vendor to send the DELETE API call for the snapshot(s) once a VM has been removed from the backup configuration. If you suspect that snapshots are being left on the system due to a VM being removed from the configuration, engage the backup vendor for further assistance.
Default expiration in AOS 5.15.3, 5.18.1 and later
Starting AOS 5.15.3 and 5.18.1, backup snapshots created without an expiration time are assigned a default 60 days expiration. Upgrade AOS to 5.15.3, 5.18.1 or later to prevent this issue in the future. However, upgrading a cluster running a previous version of AOS and having backup snapshots without an expiration date will not change those backup snapshots. They will continue to have no expiration date after the upgrade. In this case, engage the third-party backup vendor. If the snapshots are still present after consulting with the backup vendor, consider engaging Nutanix Support at https://portal.nutanix.com.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about How to troubleshoot Network Issues on ESXi in a Nutanix Block. ### Assistant: This article reviews troubleshooting techniques on the physical network layer of ESXi.. If the vSphere when a cluster is configured with Distributed Virtual Switches, health checks are enabled that identify misconfiguration of MTU, VLAN, Teaming and Failover: https://kb.vmware.com/s/article/2032878
Below are some of the things that can be checked when troubleshooting Network issues on an ESXi cluster configured with Standard Virtual Switches:
Check for the link and negotiation status:
~# esxcfg-nics -l
~# esxcfg-nics -l
Note: In some ESXi hosts, ethtool may fail with error "Can not get control fd: No such file or directory." Use esxcli network nic get -n <vmnic#> instead to display the Network Device information. Example:
~# esxcfg-nics -l
Check for link up/down/flapping. Log into the respective host and check the log file /var/log/vmkernel.log:
~# esxcfg-nics -l
Force re-negotiation by shutting down and bringing up a port:
~# esxcfg-nics -l
Check VLAN and MTU settings. Use CDP to find the VLAN and network interface it is connected to:
~# esxcfg-nics -l
Display the VLANs each port group is connected to:
~# esxcfg-nics -l
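On hosts using Standard Virtual Switches, the port group to VLAN mapping is typically listed with the following command, whose output includes each port group name and its VLAN ID:
~# esxcfg-vswitch -l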
Check for MTU/duplex issues.
~# esxcfg-nics -l
~# esxcfg-nics -l
Check the interface for network errors. (The NIC interface is vmnic2 in this example. To see the list of NICs use the following command: esxcfg-nics -l):
~# esxcfg-nics -l
Note: For Mellanox, use:
~# esxcfg-nics -l
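For most non-Mellanox NICs, per-NIC error counters can also be read with esxcli; vmnic2 is an example interface name, and this assumes ESXi 5.5 or later:
~# esxcli network nic stats get -n vmnic2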
Check if the vmkernel network interface (vmk1) for vSwitchNutanix is enabled:
~# esxcfg-nics -l
Use the following command to enable the vmk1 interface:
~# esxcfg-nics -l. Resolve physical layer issues:
Move it to a different switch port. Change the cable. Replace the NIC.
Resolve VLAN issues. Check the VLAN tagging in the management port group. It should match the appropriate VLAN, or if it is a native VLAN, remove the tagging. Resolve packet drop and network latency issues:
Check the CVM /home/nutanix/data/logs/sysstats/ping_*.INFO for "unreachable" errors by running the following command:
nutanix@cvm$ cat ~/data/logs/sysstats/ping_*.INFO | egrep -i 'TIMESTAMP|unreach'
Check the CVM for high network latency issues:
nutanix@cvm$ cat ~/data/logs/sysstats/ping_*.INFO | egrep -i 'TIMESTAMP|unreach'
Print any latency beyond 30ms:
nutanix@cvm$ cat ~/data/logs/sysstats/ping_*.INFO | egrep -i 'TIMESTAMP|unreach'
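One possible way to filter the ping logs for round-trip times above 30 ms is sketched below; the awk expression assumes the standard "time=<value> ms" format of the ping output in these logs:
nutanix@cvm$ cat ~/data/logs/sysstats/ping_*.INFO | awk -F'time=' '/time=/ { split($2, a, " "); if (a[1] + 0 > 30) print $0 }'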
. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about How to Enable Share Clipboard on Zoom. ### Assistant: How to Enable Share Clipboard on Zoom during a remote session. How to Enable Share Clipboard on Zoom during the remote session.. 1. Find the hidden toolbar at the top of the shared screen. 2. Click on "More" (three dots) -> "Remote Control". 3. Make sure "Share Clipboard" is ticked. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Nutanix Kubernetes Engine - Kubernetes cluster is unhealthy after reducing the number of worker nodes to fewer than three. ### Assistant: In Kubernetes clusters deployed with NKE 2.7, reducing the number of worker nodes in a Production cluster to fewer than three causes the cluster to be reported as unhealthy due to an insufficient number of calico-typha replicas.. Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services.
Calico-typha is a component of the Calico Container Network Interface (CNI) used by NKE to increase scale by decreasing the impact Kubernetes nodes have on the datastore used by Calico. Starting with NKE 2.7, the number of replicas of calico-typha was increased from one to three in Kubernetes Production clusters. As a result of this change, decreasing the number of worker nodes in an NKE Production Kubernetes cluster from three or more nodes to fewer than three nodes will result in the Kubernetes cluster being reported as unhealthy. This is due to the cluster expecting to have three replicas of calico-typha instead of the one replica present after reducing the number of worker nodes to one or two.
When the number of nodes is reduced to fewer than three, the node removal task will hang for several minutes at the "Verifying health after removal of worker node" step until the task eventually fails and reports the following error in the NKE UI:
Remove worker failed to delete worker node: 1 nodes failed to be removed out of 1: failed to get the cluster health: Operation timed out: Calico: Calico unhealthy Max retries done: Failed to verify calico: Operation timed out: expecting 3 available replicas of calico-typha deployment in kube-system namespace. Currently running: 2
In the NKE UI, the status of the cluster is reported as Critical.
The Alerts page in the NKE UI will report the following two types of alerts:
Remove worker failed to delete worker node: 1 nodes failed to be removed out of 1: failed to get the cluster health: Operation timed out: Calico: Calico unhealthy Max retries done: Failed to verify calico: Operation timed out: expecting 3 available replicas of calico-typha deployment in kube-system namespace. Currently running: 2
In the Prism Central UI, the "Remove worker <worker node name> from node pool <node pool name>" task will show as Failed and report the following error:
Remove worker failed to delete worker node: 1 nodes failed to be removed out of 1: failed to get the cluster health: Operation timed out: Calico: Calico unhealthy Max retries done: Failed to verify calico: Operation timed out: expecting 3 available replicas of calico-typha deployment in kube-system namespace. Currently running: 2
The /home/nutanix/data/logs/karbon_core.out log on the Prism Central VM(s) will contain the following type of errors:
Remove worker failed to delete worker node: 1 nodes failed to be removed out of 1: failed to get the cluster health: Operation timed out: Calico: Calico unhealthy Max retries done: Failed to verify calico: Operation timed out: expecting 3 available replicas of calico-typha deployment in kube-system namespace. Currently running: 2
On the Prism Element cluster, the VM removal task for the worker node is successful, and the worker VM is deleted.
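If kubectl access to the affected Kubernetes cluster is available (for example, after downloading the kubeconfig from the NKE UI), the degraded replica count can be confirmed with a standard deployment query; while the issue is present, the READY column is expected to show fewer than 3 available replicas:
$ kubectl -n kube-system get deployment calico-typha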
Note that this issue does not impact Kubernetes clusters deployed with NKE versions prior to 2.7, as those clusters would have been deployed with a single replica of calico-typha. This issue also does not impact Development clusters deployed with NKE 2.7, as Development clusters are deployed with a single calico-typha replica.. To resolve this issue, increase the number of worker nodes to three or more. For more information on adding worker nodes, see Creating a Node Pool in the Nutanix Kubernetes Engine 2.7 Guide.
Decreasing the number of worker nodes in an NKE Production Kubernetes cluster to fewer than three is not recommended and should be avoided. The NKE UI does not prevent deploying a Production cluster with one or two worker nodes, but deploying a new Production cluster with fewer than three nodes is also not recommended and should be avoided.
If the number of worker nodes cannot be increased to three or more, contact Nutanix Support (login required) for assistance.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NCC Health Check: ec_enablement_check. ### Assistant: The NCC health check ec_enablement_check checks if erasure coding should be enabled or disabled on a storage container.. The NCC health check plugin ec_enablement_check reports whether Erasure Coding data savings technique would be efficient for each of the storage containers in a cluster.This check recommends enabling erasure coding in cases where estimated EC savings are significant.
Running the NCC Check
It can be run as part of the complete NCC check by running
ncc health_checks run_all
Or individually as:
ncc health_checks run_all
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every 1 day, by default.This check will generate a severity INFO alert A101082 after 1 failure.
Sample output
For status: PASS
ncc health_checks run_all
For status: INFO
ncc health_checks run_all
Output messaging
Check ID101082DescriptionChecks if erasure coding should be enabled on a storage container.Causes of failureErasure coding is recommended to be enabled on this storage container.ResolutionsEnable erasure coding to get space savings.ImpactWorkload is suitable for erasure coding space savings, but EC is not enabled.Alert IDA101082Alert TitleEnable erasure coding on containerAlert Smart TitleEnable erasure coding on container container_nameAlert MessageEnable erasure coding on container container_name to get at least savings_pct% space savings.. Troubleshooting
The NCC ec_enablement_check uses an estimated-savings threshold of 18% when recommending that EC be enabled on specific storage containers in a Nutanix cluster. Not every workload is suitable for erasure coding, as detailed in the Prism Web Console guide.
Note: In NCC version 4.6.5, this check might recommend enabling EC on the NutanixManagementShare, which is a storage container for Nutanix-only features and cannot be updated or deleted in Prism UI.
Nutanix Engineering is aware of the issue and is working on a solution.
Resolving the issue
Enable erasure coding only on the storage containers identified by this NCC check.
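Erasure coding is normally enabled from Prism (Storage > Container > Update), but it can also be enabled from the CLI. The following is a hedged sketch of the ncli form; the container name is an example, and the exact parameter names and values should be verified with "ncli container edit help" on your AOS version:
nutanix@cvm$ ncli container edit name=ctr01 erasure-code=on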
Realizing the EC space savings will take at least two Curator full scans to do the encoding and remove the excessive replicas. Curator full scans are scheduled to run every 6 hours by default. For more information on Curator scan types and schedules, see KB 2101.
Collecting additional information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691.
nutanix@cvm$ logbay collect --aggregate=true
Attaching files to the case
Attach the files at the bottom of the support case on the support portal. If the size of the NCC log bundle being uploaded exceeds 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NCC Health Check: ahv_file_integrity_check. ### Assistant: Introduced in NCC 4.1.0, the AHV file integrity check detects any unexpected changes in the base configuration of an AHV host.. The NCC health check ahv_file_integrity_check (AHV file integrity check) detects any unexpected changes in the base configuration of an AHV host. This check runs only on AHV, and only if the cluster has enabled 'AIDE' via NCLI and additional STIG DoDIN APL enablement has been performed on supported AOS/AHV versions. By default, and unless specific steps have been taken to enable this functionality on demand, this check will return a PASS result. Using "aide" as the file integrity tool, it tracks any changes in critical files on an AHV host filesystem. A list of files and their tracked properties is periodically stored, and any change in the tracked properties of these files will raise a warning alert, which in turn triggers the alert workflows configured on the cluster, such as email, syslog, SNMP trap, and Prism Alert notifications. It can be run as part of the complete NCC check by running:
nutanix@cvm$ ncc health_checks run_all
or individually as:
nutanix@cvm$ ncc health_checks run_all
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.This check runs on each CVM to assess its AHV host. It does not run on Prism Central.This check is scheduled to run once per week.This check produces a WARN alert after a single failure.Sample outputCheck Status: PASS
nutanix@cvm$ ncc health_checks run_all
Check Status: WARN
nutanix@cvm$ ncc health_checks run_all
Check Status: ERR
nutanix@cvm$ ncc health_checks run_all
Output messaging
Check ID111086DescriptionWeekly file integrity check to detect any modifications in the baseline configuration of the host.Causes of failureDetected unexpected modifications in the baseline configuration of the host.ResolutionsRefer KB10414 to review the detected changes.ImpactBaseline configurations of the host might have been changed in an unauthorized manner.Alert IDA111086Alert TitleFile integrity check found modifications.Alert MessageFile integrity tool has found modifications on host {host_ip}.. Investigating a WARN: On detection of any unexpected changes, a file is created under the /var/log/aide/ directory of the affected AHV host. The name of the file is in the format file-integrity-tool-YYYY-MM-DD_HH-MM-SS.log, where the timestamp is the time of execution of the file integrity tool. For assistance with assessing the log output, engage Nutanix Support. To speed up the resolution time and minimize possible impact, avoid performing any activity on the cluster that would involve host filesystem changes or reboots (including upgrades). Also, collect and attach the following information to the support case:
A complete NCC report:
nutanix@CVM$ ncc health_checks run_all
A log bundle generated from the cluster. This can be collected through Prism Web Console's health page. Select Actions > Collect Logs. Logs can also be collected through the command line using logbay (KB 6691 - NCC - Logbay Quickstart Guide):
nutanix@CVM$ ncc health_checks run_all
Investigating an ERRIf the output "ERR : Caught error while extracting information using host agent API call" is seen for any node, it may indicate that the periodic file state tracking has not yet run for the first time to initialize the aide database on the affected AHV host. Please wait for the next scheduled run of this check. In case the above-mentioned step does not resolve the issue, consider engaging Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NCC Health Check: nvme_raid_check. ### Assistant: The NCC health check nvme_raid_check verifies if the NVMe RAID is functioning correctly.. The NCC health check nvme_raid_check verifies if the NVMe RAID is functioning correctly. The NVMe RAID controller, having two NVMe drives connected in RAID1 mode, serves as the host boot device on G8 platforms. In case of a single drive failure, the hypervisor will still be available and functional.This check contains the following parts:
Checking the RAID status. Checks the RAID status for high availability. FAIL is returned if RAID is in degraded mode.Checking the NVMe disk for uncorrectable ECC errors. If a disk encounters any uncorrectable errors, this check returns FAIL.Checking the wear-out status of the NVMe boot disk. If the NVMe disk is worn out and needs to be replaced, this check returns WARN.Checking the NVMe disk temperature. If the NVMe disk temperature is not in the range given by the vendor, WARN is returned.Checking the RAID rebuild status. Checks whether RAID rebuild is currently in progress. WARN is returned if rebuild is in progress.Checking the firmware of the NVMe boot disk. If the NVMe disk needs an upgrade, this check returns WARN.Checking the firmware of the NVMe RAID Controller. Checks if the RAID controller needs a firmware update.
Running the NCC check
You can run this check as part of the complete NCC Health Checks:
nutanix@cvm$ ncc health_checks run_all
Or you can also run this check separately:
nutanix@cvm$ ncc health_checks run_all
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run
Sample output
For status: PASS
nutanix@cvm$ ncc health_checks run_all
For status: FAIL (on RAID status)
nutanix@cvm$ ncc health_checks run_all
For status: FAIL (on cluster formation)
nutanix@cvm$ ncc health_checks run_all
For status: WARN (on M.2 UECC)
nutanix@cvm$ ncc health_checks run_all
For status: WARN (on M.2 Temperature)
nutanix@cvm$ ncc health_checks run_all
For status: WARN (on NVME wear-out)
nutanix@cvm$ ncc health_checks run_all
For status: WARN (on RAID rebuild)
nutanix@cvm$ ncc health_checks run_all
For status: WARN (on NVME firmware)
nutanix@cvm$ ncc health_checks run_all
For status: WARN (on RAID firmware)
nutanix@cvm$ ncc health_checks run_all
For status: ERR (on could not fetch host hardware raid info)
nutanix@cvm$ ncc health_checks run_all
Output messaging
Check ID106071DescriptionCheck the status of NVMe RAID if degraded.Causes of failureOne of the M.2s connected to the RAID card is offline or has issues. Check alert message for the drive and port details.ResolutionsReplace the failing M.2. Check for the port and drive details in the alert message. Or Contact Nutanix Support.ImpactThe node will be up and running with one M.2. But it has a risk of going down in the absence of redundancy.ScheduleEvery 30 minutes
Check ID106072DescriptionChecks the temperature of each drive and alert if it is not within 0C and 75CCauses of failureThe ambient temperature in the affected host environment may be too high for effective cooling, or airflow through host may be suboptimal.ResolutionsCheck for any airflow obstructions physically near the node and the vents. Make sure air conditioning systems are functional.ImpactContinued operation above temperature thresholds may damage the host boot drives and lead to unplanned downtimeAlert IDA106072Alert Smart TitleTemperature of M2 disk of node host_ip is out of rangeAlert TitleTemperature of M2 disk is out of rangeAlert Messagealert_msgScheduleEvery 30 minsNumber of failures to alert1
Check ID106073DescriptionChecks if the disk has returned UECC errors.Causes of failureBoth M.2s connected to the RAID card have UECC errors.ResolutionsIt is recommended to replace one of the drives having UECC.ImpactContinued operation with UECC errors may lead to unplanned downtime.Alert IDA106073Alert Smart TitleM2 disk of node host_ip, returned UECC errorsAlert TitleM2 disk returned UECC errorsAlert Messagealert_msgScheduleEvery 30 minutesNumber of failures to alert1
Check ID106074DescriptionChecks wear-out level of the M.2.Causes of failureM.2 has worn out and reaching its specification limit.ResolutionsThe disk needs to be replaced.ImpactSlow Performance.Alert IDA106074Alert Smart TitleM2 disk of node host_ip, worn outAlert TitleM2 disk has worn outAlert Messagealert_msgScheduleEvery 30 minutesNumber of failures to alert1G8 False Positive IssuePlease upgrade NCC to Version 4.6.2.1 or higher. There is a false positive condition that is addressed in newer NCC versions. No drive replacement is needed. To verify resolution after upgrading, run the check again: ncc health_checks hardware_checks disk_checks nvme_raid_check.
Check ID106075DescriptionChecks if M2 disk firmware needs to be upgraded.Causes of failureM.2 drive is running outdated firmware and requires an update.ResolutionsReview KB10355 for instructions on upgrading the host boot drive firmware.ImpactM.2 drive may not have up-to-date performance and stability features of more recent firmware.Alert IDA106075Alert Smart TitleFirmware of the RAID M2 disk of node host_ip, needs to be upgraded. Perform inventory from Life Cycle Manager to check and update to the latest firmware version.Alert TitleFirmware of the RAID M2 disk needs to be upgraded.Alert Messagealert_msgScheduleEvery 24 hoursNumber of failures to alert1
Check ID106076DescriptionChecks if the RAID card BIOS, Firmware needs to be updated.Causes of failureThe hardware RAID controller is running outdated BIOS or Firmware version. Please refer to the alert message.ResolutionsReview KB10355 for instructions on upgrading the BIOS, Firmware, or Bootloader of the hardware RAID controller.ImpactHardware RAID controller may not have up-to-date performance and stability features of more recent firmware.Alert IDA106076Alert Smart TitleRAID card BIOS or Firmware or boot loader of node host_ip, needs to be updated.Alert TitleRAID card BIOS or Firmware or boot loader needs to be updated.Alert Messagealert_msgScheduleEvery 24 hoursNumber of failures to alert1
Check ID106077DescriptionChecks the temperature of the RAID card and alert if it is more than 95CCauses of failureThe ambient temperature in the affected host environment may be too high for effective cooling, or airflow through the host may be suboptimal.ResolutionsReview the physical environment of the affected host and ensure air-conditioning systems are functioning and correctly calibrated and confirm the absence of any airflow obstructions around vents and the host itself.ImpactContinued operation above temperature thresholds may damage the host boot drives and lead to unplanned downtime.Alert IDA106077Alert Smart TitleTemperature of RAID card on node {host_ip} is above thresholdAlert TitleTemperature of RAID card is above thresholdAlert Messagealert_msgScheduleEvery 30 minutesNumber of failures to alert1
Note: This hardware-related check executes on the Nutanix NX hardware.. 1. If the check status suggests a firmware update is required:
Firmware for the concerned hardware component (M.2 / Raid Controller) should be upgraded. Steps to be performed:
NVMe M.2 FirmwareNVMe RAID Controller
The M.2 and Controller firmware are end-user upgradable. Upgrade the firmware using LCM (Life Cycle Manager) by running an inventory, ensuring the LCM framework is up to date, and following the prompts to update the required components. For more details, see the Life Cycle Manager Guide. If there is no Internet connectivity, the LCM "dark site" process can be used. Refer to the following document: Using the Life Cycle Manager Without Web Access. If LCM is not an option, refer to KB 6937 for a manual update.
2. If the check returns a temperature warning:
The component temperature may be high due to a physical environment condition affecting node cooling, such as blocked or failed chassis cooling fans, ambient node temperature too high in the data center, AC failure, or insufficient hot air exhaust. Steps to be performed:
NVMe M.2 card temperatureNVMe RAID controller temperature
Cross-reference NCC output for fan failures or other high component temperatures. Closely review the physical environmental cooling and heat extraction conditions for the physical node(s) and chassis. Resolve any airflow or cooling issues and re-evaluate. If environmental conditions are optimal and there are no other signs of node/chassis temperature control issues, contact Nutanix Support for further assistance.
3. If the check status returns FAIL or WARN for the below checks, contact Nutanix Support at https://portal.nutanix.com for further assistance.
RAID status; M.2 UECC; M.2 wear-out. Known issue: In rare conditions, a false positive alert may happen when NCC fails to read the NVMe M.2 data. Nutanix recommends that customers upgrade to NCC 4.6.2.1 or later, which contains the fix for this issue.
4. If the check status returns ERR:
Confirm hypervisor is supported on the hardware platform. Contact Nutanix Support for further assistance.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about How to Configure Network Access on Node Booted into Phoenix or Rescue Shells. ### Assistant: This article describes how to configure network access on a node booted info Phoenix or Rescue shells.. Phoenix is an in-memory Gentoo Linux OS that Nutanix uses for various workflows. If you find that a host in your cluster is unreachable over the network but IPMI console shows a prompt similar to the one below, then you are in the Phoenix Shell.
phoenix / #
You may find your node booted into Phoenix due to any of the following circumstances:
Foundation imaging; Phoenix imaging; Cluster expansion; Prism Host Boot Device (satadom) Repair - Proactive or Failed; One-Click firmware upgrade; LCM firmware upgrade.
There is a similar shell that can appear when booting a CVM with svmrescue.iso. You may find yourself here in a CVM's console view if your Single SSD Repair task has failed.
phoenix / #
This article explains how you can configure non-persistent network access on either of the above shells for the purposes of debugging or log collection.
. Steps to configure network access in a Rescue shell. For the purposes of simplicity I am using Phoenix, but the steps are the same for Rescue.
WARNING: If a VLAN tag is required, DO NOT perform the steps below. Proceed to the next procedure called "If you need to tag a VLAN for access" further below.
1) List your available physical interfaces (ethX).
phoenix / # ifconfig -a
2) Cycle through the interfaces so that they can detect any active links. Do this for all the interfaces listed in the above output.
phoenix / # ifconfig -a
Example:
phoenix / # ifconfig -a
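Cycling an interface is typically just a down/up pair per interface; eth0 is shown as an example, and the same pair should be repeated for eth1, eth2, and so on:
phoenix / # ifconfig eth0 down
phoenix / # ifconfig eth0 up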
3) Cycle through the interfaces to find the one which can detect a link. Usually this will be a 10000baseT/Full interface. Look for "Link detected: yes".
phoenix / # ifconfig -a
Example:
phoenix / # ifconfig -a
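The link state is usually read with ethtool; eth0 is an example interface, and for each interface you are looking for "Link detected: yes" along with the negotiated speed:
phoenix / # ethtool eth0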
4) Once you have found the interface with a link detected, configure the local host's CVM IP address on the interface.
phoenix / # ifconfig -a
Example:
phoenix / # ifconfig -a
5) Configure the correct netmask.
phoenix / # ifconfig -a
Example:
phoenix / # ifconfig -a
6) Configure the default gateway for the same interface.
phoenix / # ifconfig -a
Example:
phoenix / # ifconfig -a
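Putting steps 4 through 6 together, a typical sequence looks like the following sketch, assuming the same net-tools utilities used elsewhere in this procedure are available; 10.0.0.50, 255.255.255.0 and 10.0.0.1 are example values, so use the local CVM IP, netmask and gateway for your environment:
phoenix / # ifconfig eth0 10.0.0.50 netmask 255.255.255.0 up
phoenix / # route add default gw 10.0.0.1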
7) At this point, you should be able to ping the default gateway.
phoenix / # ifconfig -a
Example:
phoenix / # ifconfig -a
8) The next step is to enable SSH access so that you can log in using PuTTY.Foundation/Phoenix < 4.3
phoenix / # ifconfig -a
Foundation/Phoenix 4.3-5.1
phoenix / # ifconfig -a
Foundation/Phoenix >= 5.2
phoenix / # ifconfig -a
9) Now try to SSH into the CVM IP address you just assigned to Phoenix using "root" as the username and "nutanix/4u" as the password.
phoenix / # ifconfig -a
Example:
phoenix / # ifconfig -a
Note: The password will always be "nutanix/4u".Note: If you see a message like "Permission denied (publickey,password,keyboard-interactive)", you'll have to remove the entry for this IP address out of the /home/nutanix/.ssh/known_hosts file and then try again.
If you need to tag a VLAN for access:
Do not perform the above steps for IP configuration if a VLAN tag is required. Start from step 1 below.
1) Add a VLAN tag to the eth0 interface. Replace vlan_tag below with the VLAN used for this subnet:
phoenix / # ifconfig -a
Example:
phoenix / # ifconfig -a
2) Assign an IP and subnet mask to the interface. Replace vlan_tag with the VLAN used for this subnet:
phoenix / # ifconfig -a
Example:
phoenix / # ifconfig -a
3) Assign the default gateway
phoenix / # ifconfig -a
Example:
phoenix / # ifconfig -a
4) To enable shell access, go to step 7 of regular IP configuration above.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about How To Set CPU Power Management States (C-State) in AHV. ### Assistant: This article describes how to disable processor C-States for latency sensitive applications on AHV. Modern CPUs utilized a technology called "C-States" to manage the amount of power that individual processor cores are utilizing. When a core is idle, the server's BIOS will reduce its clock rate, power draw, or both in an effort to make the system more energy efficient. In most cases, this is the desired condition as it can significantly reduce power consumption. The unused power may be used by other CPU cores to increase their frequency (GHz), allowing instructions executing on active CPU cores to complete faster.The number of C-States available depends on the processor make and model and the capabilities of the server BIOS. C-States can be broken down into two general buckets:
Active/Idle C-States - Normal CPU activity, no power savings
C0 - The CPU is actively executing instructions. C1 - The CPU is idle, but fully online (voltage + clock-speed). C1E - The CPU is idle, its clock-speed has been reduced, but it is at full voltage.
Sleep States - CPU's clock is stopped and voltage reduced (most common states listed)
C2 - CPU clock is stopped. C3 - CPU clock is stopped and voltage reduced. C6 - CPU internal/external clocks are stopped, voltage reduced or powered off.
However, some application CPU usage patterns are bursty - requiring CPUs to transition from a low speed/voltage to full speed/voltage. If the application requires highly performant, low latency CPU operation, it may be sensitive to CPU cores transitioning from a low-power sleep state to full power/clock-speed. For applications where low latency is more important than power savings, the hypervisor can be instructed to disable processor C-States, preventing the system from powering down CPUs.
These settings are recommended or may be required for applications including Epic Operational Database (ODB), Epic Hyperspace, Intersystem Caché and MEDITECH File Server, Oracle and MS SQL databases, as well as SAP HANA, RDMA (Remote Direct Memory Access) enabled systems and other environments where C-States could add latency.. Before modifying the CPU power states from the operating system, the BIOS has to allow it. Consult the BIOS documentation for the specific platform running the workload and follow the recommendations from the hardware manufacturer for specific applications. Ensure that the BIOS allows managing CPU power states from the OS (probably only an issue on old BIOS levels). AHV can override UEFI/BIOS C-State settings, so you need to use this process in addition to the BIOS settings.
Note that directly editing C-States in the BIOS is not supported for the Nutanix hardware platform as outlined in Release Notes | BMC and BIOS.
Enable the AHV C-State service
There is a service available on AHV that will disable the use of Sleep C-States if the BIOS allows it. To use this service, run the following commands on the CVM to enable it on each AHV host and start it.
On AHV 20190916.x (bundled with AOS 5.15.4 and part of AOS 5.16 and higher):
nutanix@cvm$ hostssh systemctl enable cstate
The cstate service disables C-States C3 and C6. Testing has shown that disabling these two states has the greatest impact on performance. C-States C0, C1, C1E and C2 are not affected.
To verify that the cstate service is running, execute the following command from a CVM:
nutanix@cvm$ hostssh systemctl enable cstate
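A minimal end-to-end sketch, assuming the service name cstate described above and run from any CVM (hostssh pushes the command to every AHV host in the cluster):
# Enable the service so it persists across host reboots, then start it and confirm it is active
nutanix@cvm$ hostssh "systemctl enable cstate"
nutanix@cvm$ hostssh "systemctl start cstate"
nutanix@cvm$ hostssh "systemctl is-active cstate"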
Note: The setting will also need to be applied when additional hosts are added to the cluster or if a host is re-imaged.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Alert - A400120 - CalmIDFError. ### Assistant: Investigating CalmIDFError alerts on a Nutanix cluster. Note: Nutanix Self-Service was formerly known as Calm.
This Nutanix article provides the information required for troubleshooting the alert CalmIDFError for your Nutanix cluster.
Alert Overview
The CalmIDFError alert is generated when there are IDF errors in Calm.
Case: Registration of an entity type has failed
This also results in the subsequent failure of the migration of this entity. This usually happens because there are a large number of entities present for the given entity type. The connection between Calm and IDF times out (default 5-minute timeout) during entity registration, which causes this issue.
Sample Alert
Block Serial Number: 16SMXXXXXXXX
Output messaging
Check ID400120DescriptionError in Calm related to IDFCause of failureIDF related error in CALMResolutionPlease refer to KB-14577ImpactDB Seed or Calm Entity migration might fail. Calm Service will get impactedAlert IDA400120Alert TitleIDF error in CalmAlert MessageIDF error: message. Troubleshooting and Resolving the Issue
Case: Registration of an entity type has failed
Restarting the nucalm docker container might fix the issue. On restarting, Calm tries to re-register the entities required and if there were intermittent issues related to IDF, they might get resolved.
Steps to restart nucalm container:
Log in to Prism Central (PC). Execute:
nutanix@pcvm$ svmips
This will output the Prism Central IPs where you need to restart the nucalm container.
Log in to each Prism Central IP and execute:
nutanix@pcvm$ svmips
After successfully executing the above command, you should see a list of services running on Prism Central with nucalm: [] indicating the nucalm docker container has stopped.
Now execute:
nutanix@pcvm$ svmips
After successfully starting the container, you should again see a list of services on Prism Central. This time, nucalm should have some values associated with it.
Now, to verify if all the entities were seeded successfully, you can check for the nucalm container logs present at ~/data/logs/nucalm.out. After restarting the container, if you see log traces like the one below, then the registration is still failing.
nutanix@pcvm$ svmips
If you do not see the above error trace, the CALM entities have been registered successfully in the IDF.
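A commonly used stop/start sequence on a PCVM is sketched below (an assumption - confirm the exact commands for your deployment with Nutanix Support before running them):
# Stop only the nucalm service on this PCVM; other services keep running
nutanix@pcvm$ genesis stop nucalm
# cluster start brings back any stopped services, including nucalm
nutanix@pcvm$ cluster start
# Confirm nucalm is listed with PIDs again
nutanix@pcvm$ genesis status | grep -i nucalm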
If you need assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com. Collect additional information and attach them to the support case.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691.
nutanix@pcvm$ svmips
Attaching Files to the Case
To attach files to the case, follow KB 1294.
If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Alert - A160167 - File Server HA state. ### Assistant: Alert to notify when a Files Cluster is in an HA state. This Nutanix article provides the information required for troubleshooting the alert File-Server-HA-state for your Nutanix Files cluster.
Alert Overview
The File-Server-HA-state alert is generated when Files is in a High Availability state.
Sample Alert
Block Serial Number: 16SMXXXXXXXX
Output messaging
Check ID160167DescriptionFile Server HA stateCause of failureFile Server is in High Availability StateResolutionsRefer to KB article 14282. Contact Nutanix support if issue still persists or assistance needed.ImpactFile Server Performance might get affected.Alert IDA160170 Alert TitleFile Server HA stateAlert Message "File Server Performance might get affected."
. Resolving the Issue
Note: This alert can trigger as an "INFO" event during Nutanix Files upgrades. This is expected behavior.
Login to one of the FSVMs and confirm that this is not a false positive.
nutanix@NTNX-A-FSVM:~$ afs ha.minerva_check_ha_state
Confirm that the FSVMs are up and reachable. The bolded text below points out where an FSVM is inaccessible.
nutanix@NTNX-A-FSVM:~$ afs ha.minerva_check_ha_state
Confirm via the Prism GUI that the FSVMs are powered on. The name of the FSVM will start with NTNX and contain the name of the Files server. For example: NTNX-Files1-1. If the FSVM is powered on, open the Console of the FSVM to see if there is any output displayed. Gather logs using the guide below and attempt restarting the FSVM before providing the logs to Nutanix Support.
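On AHV-backed clusters, the same power-state check can also be made from a CVM with acli; a minimal sketch (the FSVM name below is the example name used in this article):
# List the Files VMs, then inspect the power state of one of them (kOn means powered on)
nutanix@cvm$ acli vm.list | grep -i NTNX
nutanix@cvm$ acli vm.get NTNX-Files1-1 | grep -i state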
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Run a complete NCC health_check on the cluster. See KB 2871.Collect Files related logs. For more information on Logbay, see KB-3094.
CVM logs stored in ~/data/logs/minerva_cvm*
NVM logs stored within the NVM at ~/data/logs/minerva*
To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader, as issues have been seen otherwise. To get the Minerva leader IP on the AOS cluster, run:
nutanix@NTNX-A-FSVM:~$ afs ha.minerva_check_ha_state
Once you are on the Minerva leader CVM, run:
nutanix@NTNX-A-FSVM:~$ afs ha.minerva_check_ha_state
For example:
nutanix@NTNX-A-FSVM:~$ afs ha.minerva_check_ha_state
Attaching Files to the Case
To attach files to the case, follow KB 1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
Requesting Assistance
If you need assistance from Nutanix Support, add a comment in the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the support team by calling one of our Global Support Phone Numbers. You can also press the Escalate button in the case and explain the urgency in the comment, and then Nutanix Support will be in contact.
Closing the Case
If this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case and Nutanix Support will close the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about AHV ISO shows "PRE-RELEASE/TESTING" during installation. ### Assistant: Manual installation of AHV 20190916.x ISO downloaded from portal may show as "Pre-Release/Testing" on the installation screen.. Manual installation of the AHV 20190916.x ISO downloaded from the portal may show as "Pre-Release/Testing" on the installation screen. This can be seen in the UI in red. A similar message is observed during boot in text mode.. Installation of AHV directly from the AHV ISO is not a supported workflow. In order to reinstall AHV, consider using the Host Boot Disk Repair workflow (the section about hardware replacement is not needed in this case and should be skipped), or follow the steps in KB 3523 to make a bootable Phoenix ISO with bundled AHV and AOS bits.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Alert - A111092 - VM Deployment from Template check. ### Assistant: NCC 4.6: This Nutanix article provides the information required for troubleshooting the alert "VM Deployment from Template check" for your Nutanix cluster.. This Nutanix article provides the information required for troubleshooting the alert "VM Deployment from Template check" for your Nutanix cluster.Alert OverviewThe "VM Deployment from Template check" alert is generated when VM deployment from a template can not be successfully completed.Sample Alert
Warning: Report on deploy VM to cluster status
Potential Impact
VM is not deployed.
Output Messaging
Check ID111091DescriptionReport on deploy VM to cluster statusCause of FailureA PC service or Destination Cluster may be unreachable or undergoing maintenance.ResolutionsCheck connectivity between cluster, Prism Central and the destination Prism Element. Ensure PE-PC connection is healthy on the destination PE cluster and reinitiate the operation to make up for the partial failure.ImpactSome VM(s) will not be present on the Cluster.Alert Smart TitleFailed to deploy VM {vm_name} from Template {template_name} to {cluster_name} Alert TitleVM deployment from Template to Prism Element Cluster failedAlert MessageVM from Template could not be fully deployed to the Cluster
. Troubleshooting
Ensure the Prism Element - Prism Central connection is healthy and retry the operation. Contact Nutanix Support at https://portal.nutanix.com if additional assistance is required. Collect additional information and attach it to the support case.
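One way to spot-check the Prism Element - Prism Central connection from a CVM of the destination cluster is sketched below (an assumption - output fields vary by AOS version; the registered Prism Central entry should be listed and reported as connected):
# Show the registration/connection state between this PE cluster and Prism Central
nutanix@cvm$ ncli multicluster get-cluster-state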
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB-2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB-2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB-6691.
nutanix@cvm$ logbay collect --aggregate=true
If the logbay command is not available (NCC versions prior to 3.7.1, AOS 5.6, 5.8), collect the NCC log bundle instead using the following command:
nutanix@cvm$ logbay collect --aggregate=true
Attaching Files to the Case
Attach the files at the bottom of the support case on the support portal. If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB-1294.
Requesting Assistance
If you need further assistance from Nutanix Support, add a comment to the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the Support Team by calling one of our Global Support Phone Numbers. You can also click the Escalate button in the case and explain the urgency in the comment, and Nutanix Support will be in contact.
Closing the Case
If this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case and Nutanix Support will close the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Memory hot-add to a UVM running on AHV is rounded up to the next GB. ### Assistant: Hot-added memory is always rounded up to the next GB (e.g. setting VM memory to 3.1G will be rounded up to 4G). UVM memory set while the VM is powered off is always applied as entered, without rounding up.. Hot adding memory to a user VM running on AHV will round up the added memory to the next GB. That can lead to one of the following situations:
The UVM's memory is currently set to a whole number of GB, so setting it to any value with a fraction will always be rounded up to the next GB value.
Example: The VM currently has 8 GB of memory:
nutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory
Attempting to increase the VM memory to 8392 MB:
nutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory
The process completes successfully; however, verifying the actual amount of memory shows that the memory is now at 9 GB:
nutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory
The UVM's memory is currently set with a fraction of a GB. Any amount of memory added will be rounded up to the next whole GB and then added to the old value.
Example: The VM currently has 8.5 GB of memory (8704 MB):
nutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory
Attempting to add 200 MB to that VM (end value 8904 MB):
nutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory
The operation completes successfully, but instead of 200 MB it adds 1 GB to the original value. That is, it rounds up the added amount to the next whole GB:
nutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory
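A minimal sketch of the cold-add workaround described next, using standard acli commands (the VM name and memory size below are placeholders):
# Power off the VM, set the exact (fractional) memory size, power it back on, then verify
nutanix@CVM:~$ acli vm.off <UVM_name>
nutanix@CVM:~$ acli vm.update <UVM_name> memory=8904M
nutanix@CVM:~$ acli vm.on <UVM_name>
nutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory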
Memory cold-added or set while the VM is powered off will always be applied as entered and will not be rounded up.. To set the memory of a VM to a value with a fraction of a GB, power off the VM and cold-add the memory. If the VM memory is already configured with a fraction of a GB and you want to set it to a whole GB value, the VM also needs to be powered off.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about AHV VMs may be restarted unexpectedly due to memory corruption. ### Assistant: In some cases, an unexpected user VM restart can be caused by memory corruption. If the Frodo process, which is responsible for servicing I/O generated by a user VM, tries to access a corrupted memory page, it will stop working and the user VM will be rebooted.. In some cases, an unexpected guest VM restart is caused by memory corruption. It may happen if the Frodo process, which is responsible for servicing I/O generated by a guest VM, tries to access a corrupted memory page, or during VM live migration.
VM was rebooted during normal operations
The /home/nutanix/data/logs/acropolis.out log on the CVM (Controller VM) that hosts the Acropolis leader will contain the following events:
2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff
/var/log/libvirt/qemu/<vm uuid>.log on the AHV host will contain the following events:
2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff
/var/log/messages on the AHV host will contain the following events:
2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff
VM was rebooted during live migration
The migration task may fail with the following error:
2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff
/var/log/libvirt/qemu/<vm uuid>.log on the AHV host will contain the following events:
2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff
/var/log/messages on the AHV host will contain the following events:
2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff
In addition, the following logs can confirm memory corruption:
The following error can be seen in the /var/log/mcelog on the AHV host:
2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff
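Both this mcelog check and the HardwareCorrupted counter described next can be gathered fleet-wide from a CVM; a minimal sketch (an assumption that the log locations are at their defaults):
# Recent machine-check events on every AHV host
nutanix@cvm$ hostssh "tail -n 20 /var/log/mcelog"
# Memory the kernel has marked as hardware-corrupted on each CVM (a non-zero value indicates corruption)
nutanix@cvm$ allssh "grep HardwareCorrupted /proc/meminfo"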
You may notice on CVM that the value of the HardwareCorrupted counter is not equal to zero:
2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff. A hardware issue is suspected to cause this problem. Make sure that the latest recommended versions of BIOS and BMC firmware are running and engage the hardware vendor to perform hardware diagnostics. If the hardware is NX hardware, consider engaging Nutanix Support at https://portal.nutanix.com/.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Nutanix Files - Steps to convert Non Distributed Share to a Distributed Share. ### Assistant: The following article provides the steps necessary to convert a share from a standard share to a distributed share.. This article applies to Nutanix Files 4.0.2 and later.
Beginning in Nutanix Files 4.0.2, a feature was introduced to support converting a file share from one type of share to another. In this initial release, the workflow supported is converting from a standard share to a distributed share.
Note: this article is intended to be walked through by Support with a customer to validate whether the share to be converted is a good candidate and the steps to kick off the share conversion.
What is supported?
From Standard to Distributed Shares (one-way conversion).
CLI-based processes (start, stop, pause, resume, abort, cut-over).
Max 50M files and 2000 TLDs in the share to be converted.
Only a single conversion can be run at once.
Note: Converting a standard (general) share to a distributed (home) share has a known limitation where share-level attributes are not applied on the target (distributed) share. The share attributes will need to be re-applied at the share level, i.e. at the root of the target share.
What is not supported?
Conversion via UI.
Files at the root of the source standard share. Windows hidden or system files at the root of the standard share may prevent the share conversion. To see these files from the Windows client, please make sure to select "Show hidden files, folders, or drives" and uncheck "Hide protected operating system files (Recommended)" under the View tab of Windows Explorer Folder Options.
Distributed shares to Standard shares (future roadmap).
Multiple active share conversions (1 active conversion at a time).
DoDIN APL Configurations.
Changing the amount of storage allocated to Files is prohibited whilst a conversion process is running.
What guardrails are in place?
Share Attributes
Must manually set the following post-conversion: SSR, File Blocking, and Quotas.
Antivirus settings are applied after the conversion cutover.
File Server updates such as re-IP, Node Add, and Node Delete are allowed.
VG Move tasks cannot be initiated during conversion.
TLDs created during conversion are limited to 2,000.
Smart DR or Tiering: if either is enabled, the shares cannot be converted.
There is currently no guardrail for storage space used by Files. Make sure there is enough free space to replicate the share you are converting. . Note:
All commands will be run on the File Server VM (FSVM) command line unless stated otherwise. In the following examples, the non-distributed share name is test_share.
DoDIN enabled clusters only
Use this command to disable DODIN temporarily to allow share conversion to complete:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Enable DoDIN again with this command:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
If you forget to do the above on a DoDIN-enabled cluster, the conversion will never complete. The FSVM command 'afs sync.make_distributed status conversion_id=<UUID>' will continue to cycle through all of the files for conversion, returning "BYTES TRANSFERRED - NONE".
Steps to convert a share:
Run a pre-check on the standard share to be converted (note this can be skipped because a pre-check is done at the start of a share conversion):
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Note: The conversion process does not support files at the root of the source SMB share, since files at the root are not supported post-conversion. This guardrail is part of the precheck process. You may see the following error:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Example:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Start the share conversion on the standard share to be converted:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Note: Once the share conversion process is started, a share conversion operation is created with a conversion ID. Example:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
To check the status of the running conversion, use the conversion ID obtained in step 2.
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Example:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Before proceeding with the cutover, you will want to make sure that the status shows Status: Active [L0 Completed]:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Initiate the cutover to the distributed share. Once this command is initiated, share connections to the standard share will be flushed. In addition, both the standard share and the new distributed share will be in No Access Mode. A final replication will take place.
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Example:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Note: The default cutover time is 30 mins to complete the final replication. If that time limit is exceeded, the original share will be made available again, or you can modify the time by changing the “cutover_timeout” option.
Check the status of the cutover and look for “status: completed”:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Example:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Note: The old standard share will be renamed to <standard_share_name>_ntnx_old, and a new distributed share will be created with the original share name.
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
To apply the settings from the original standard share to the new distributed share, run the following command:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
Example:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
If the clients are having file access issues post-conversion, have them remount the NFS export or the SMB share to avoid file access issues from stale file handles. In addition, they may have to clear the DFS cache on Windows clients.
If, for any reason, you need to roll back this operation, such as running out of storage space allocated to Files, you can abort the procedure using the following:
nutanix@FSVM:~$ afs infra.set_sudo_auth state=off
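As a small convenience during long-running conversions, the status command named earlier in this article can be polled periodically; a sketch under the assumption that the watch utility is available on the FSVM and that <conversion_UUID> is the ID returned by the start step:
# Re-run the status check every 5 minutes until 'Status: Active [L0 Completed]' appears
nutanix@FSVM:~$ watch -n 300 'afs sync.make_distributed status conversion_id=<conversion_UUID>'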
. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about AOS Only - What to do when /home partition or /home/nutanix directory on a Controller VM (CVM) is full. ### Assistant: This article describes ways to safely free up space if /home partition or /home/nutanix directories of a CVM become full or do not contain sufficient free space to facilitate an AOS upgrade.. Important Notes:
Do not use this KB or the cleanup script if the /home partition is exceeding the limit on a Prism Central VM (PCVM). For the PCVM issue, refer to KB 5228.
Log in to the CVMs as the nutanix user.
Do not treat the Nutanix CVM (Controller VM) as a normal Linux machine.
Do not use rm -rf under any circumstances unless stated. It will lead to data loss scenarios.
If you are running LCM-2.6 or LCM-2.6.0.1, LCM log collection can fill up the /home directory; refer to KB 14671 for a workaround.
If you receive a /home partition usage high alert on a cluster running NCC 4.0.0, also check KB 10530.
You can review the specific clusters affected by this alert via the discoveries on the Support Portal powered by Nutanix Insights here.
Contact Nutanix Support if you have any doubts.
CVM /home partition or /home/nutanix directory being full can be reported in two scenarios:
The NCC health check disk_usage_check reports that the /home partition usage is above the threshold (by default, 75%).
The pre-upgrade check test_nutanix_partition_space checks if all nodes have a minimum of 5.6 GB space in the /home/nutanix directory.
The following error messages are generated in Prism by the test_nutanix_partition_space pre-upgrade check:
Not enough space on /home/nutanix directory on Controller VM [ip]. Available = x GB : Expected = x GB
Not enough space on /home/nutanix directory on Controller VM [ip]. Available = x GB : Expected = x GB
Not enough space on /home/nutanix directory on Controller VM [ip]. Available = x GB : Expected = x GB
Not enough space on /home/nutanix directory on Controller VM [ip]. Available = x GB : Expected = x GB
Nutanix reserves space on the SSD tier of each CVM for its files and directories. These files and directories are in the /home folder you see when you log in to a CVM. The size of the /home folder is capped at 40 GB so that most of the space on SSD is available for user data.
Due to the limited size of the /home partition, it is possible to run low on free space and trigger Prism alerts, NCC health check failures or warnings, or pre-upgrade check failures. These guardrails exist to prevent /home from becoming full, as this causes data processing services like Stargate to become unresponsive. Clusters where multiple CVMs' /home partitions are 100% full often result in downtime of user VMs.
The Scavenger service running on each CVM is responsible for the automated clean-up of old logs in /home. Improvements to the Scavenger service scope were made in AOS 5.5.9, 5.10.1, and later releases. For customers running earlier AOS releases, or in special circumstances, it may be necessary to manually clean up files in some directories to bring the space usage of /home down to a level that will permit the AOS upgrade.
When cleaning up unused binaries and old logs on a CVM, it is important to note that all the user data partitions on each drive associated with a given node are also mounted within /home. Nutanix advises strongly against using undocumented commands like rm -rf /home since this will also wipe the user data directories mounted within this path. This article aims to guide you through identifying the files causing the CVM to have low free space and removing only those that can be deleted safely. . General Guidance
Checking the space usage in /home.
To accommodate a potential AOS upgrade, usage should be below 70%. Use the df -h command to verify the amount of free space in /home. In the example below, CVM x.x.x.12 /home usage is 81%.
nutanix@cvm$ allssh "df -h /home"
CVM /home partition information can be collected using the logbay command (NCC 4.0.0 and above, KB 6691).
nutanix@cvm$ allssh "df -h /home"
Cleaning unnecessary files under /home directory.
If you have any open cases with pending Root Cause Analysis, check with the case owner whether these log files are still required or can be discarded.
Warnings:
DO NOT treat the Nutanix CVM (Controller VM) as a normal Linux machine.
DO NOT use rm -rf under any circumstances unless stated. It will lead to data loss scenarios.
Contact Nutanix Support if you have any doubts.
Method 1: Using approved script
Download and run KB-1540_clean_v10.sh to clean files from approved directories. Note: This script is NOT qualified to be used on Prism Central VM.
From any CVM, run the following commands to download KB-1540_clean_v10.sh:
nutanix@cvm$ allssh "df -h /home"
Deploy the script to a local CVM or all CVMs of the cluster:
nutanix@cvm$ allssh "df -h /home"
Execute the script to clear files from approved directories.
Help
nutanix@cvm$ allssh "df -h /home"
Interactive mode
nutanix@cvm$ allssh "df -h /home"
Non-interactive mode
nutanix@cvm$ allssh "df -h /home"
Note: If the output of the script or its coloring looks incorrect, try setting the environment variable before running the script, or use the "--no_color" option:
nutanix@cvm$ allssh "df -h /home"
Interactive mode
Main menu
All Plans - Switch displayed plans (all / concerned)
Rescan - Rescan usages and update the table
List All - List all targeted files/directories
Run All - Run or dry-run all plans
Export All - Export all targeted files/directories to CSV
##: Plan # - Go to the plan item menu
Plan item menu
Rescan - Rescan and update the table
Operation - Change operation (remove/shelter/etc.)
List - List targeted files/directories
Dryrun - Dry-run this plan
Run - Run this plan
Export - Export targeted files/directories to CSV
Non-interactive mode
Commands
$ ./nutanix_home_clean.py --scan - Scan and show the usages.
$ ./nutanix_home_clean.py --list or $ ./nutanix_home_clean.py --list=<##> - List the target files for all plans or a specific plan-##.
$ ./nutanix_home_clean.py --dryrun or $ ./nutanix_home_clean.py --dryrun=<##> - Dry-run all plans or a specific plan-##.
$ ./nutanix_home_clean.py --run or $ ./nutanix_home_clean.py --run=<##> - Run all plans or a specific plan-##.
Additional Options
--operation=<op> - Choose operation (remove, shelter)
--sheletedir= <path> - Set the sheltering location.
--yes - For skipping disclaimer and confirmation.
Cleaning up after the troubleshooting
The downloaded script files, logs and exported files are expected to be removed manually after every troubleshooting session. The total size of these files should be small and will not affect the CVM's filesystem. You can remove the following files once the script becomes unnecessary. <yymmdd-hhmmss> is the creation date and time.
In the CVM where the KB script is deployed (/home/nutanix/tmp/):
KB-1540_clean.sh - downloaded file from the KB
deploytool_yyyymmdd-hhmmss.log - deployment script's log (unnecessary after deployment)
nutanix_home_clean.py - main KB script
nutanix_home_clean_config.py - config file for the main script
In the rest of the CVMs in the cluster - if deployed to all CVMs in step 2:
nutanix_home_clean.py - main KB script
nutanix_home_clean_config.py - config file for the main script
Every CVM where nutanix_home_clean.py is run:
KB-1540_v10_yyyymmdd_hhmmss_nutanix_home_clean.log - KB script's log
KB-1540_v10_yyyymmdd_hhmmss_export_*.csv - exported files (if exported)
The following command can remove all of the above:
nutanix@cvm$ allssh "df -h /home"
Method 2: Manual method
PLEASE READ: Only the files under the directories stated below are safe to delete. Take note of the specific guidance for removing files from each directory. Do not use any other commands or scripts to remove files. Do not use rm -rf under any circumstances.
Removing old logs and core files. Only delete the files inside the following directories and not the directories themselves.
/home/nutanix/data/cores/
/home/nutanix/data/binary_logs/
/home/nutanix/data/ncc/installer/
/home/nutanix/data/log_collector/
/home/nutanix/prism/webapps/console/downloads/NCC-logs-*
Use the following syntax for deleting files within each of these directories:
nutanix@cvm$ allssh "df -h /home"
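A minimal sketch of the intended pattern (an assumption - it targets only files inside the approved directories listed above, never the directories themselves, and never uses rm -rf):
# Review what is consuming space first (largest files listed first), then remove individual files by name
nutanix@cvm$ ls -lhS /home/nutanix/data/cores/
nutanix@cvm$ rm /home/nutanix/data/cores/<specific_core_file>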
Removing old ISOs and software binaries. Only delete the files inside the following directories and not the directories themselves.
Check the current running AOS version under "Cluster Version":
nutanix@cvm$ allssh "df -h /home"
/home/nutanix/software_uncompressed/ - The software_uncompressed folder is only in use when the pre-upgrade is running and should be removed after a successful upgrade. If you see a running cluster that is currently not upgrading, it is safe to remove everything within the software_uncompressed directory. Delete any old versions other than the version to which you are upgrading.
/home/nutanix/foundation/isos/ - Old ISOs of hypervisors or Phoenix.
/home/nutanix/foundation/tmp/ - Temporary files that can be deleted.
Use the following syntax for deleting files within each of these directories:
nutanix@cvm$ allssh "df -h /home"
/home/nutanix/software_downloads/
If the files under the software_downloads directory are not required for any planned upgrades, remove them from Prism Web Console > Settings > Upgrade Software. Also check the File Server, Hypervisor, NCC, and Foundation tabs to locate downloads you may not require. The example below illustrates two versions of AOS available for upgrade, each consuming around 5 GB. Click on the 'X' to delete the files.
Uncheck the “Enable Automatic Download” option if it is checked. Left unmonitored, the cluster will download multiple versions, unnecessarily consuming space in the home directory.
Re-check space usage in /home using df -h (see General Guidance of this article) to confirm that it is now below 70%.
Note: If you cannot delete the files with the below error and space not claimed, contact Nutanix Support for assistance.
nutanix@cvm$ allssh "df -h /home"
Important Notes for NC2 Clusters:
It has been observed in some instances of NC2 clusters that /tmp gets close to full. You can follow the below steps to clean the /tmp directory.
SSH to the affected CVM and check the disk usage by running "df -h" command:
nutanix@cvm$ allssh "df -h /home"
In the above output, we can see /tmp is showing 100%. Change to the /tmp directory and sort its contents by size using 'sudo du -aSxh /tmp/* | sort -h'.
nutanix@cvm$ allssh "df -h /home"
From the output you receive above, manually delete files that are bigger than 12K. For example, see the files deleted below from the above output.
nutanix@cvm$ allssh "df -h /home"
After deleting, you can check available free space using 'df -h':
nutanix@cvm$ allssh "df -h /home"
As you can see, available free space now shows 6%. You can further recheck with:
nutanix@cvm$ allssh "df -h /home"
or
nutanix@cvm$ allssh "df -h /home"
Even after cleaning up files from the approved directories, /home usage is still above the threshold
Contact Nutanix Support for assistance. Under no circumstances should you remove files from any other directories aside from those recommended by this article, as these may be critical to the CVM performance or may contain user data.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about AHV node removal stuck after successfully entering maintenance mode. ### Assistant: Node removal task for an AHV host can get stuck if the host has already entered maintenance mode before the node removal task was triggered.. The node removal task for an AHV host can get stuck if the host had already entered maintenance mode before the node removal task was triggered.
Below are the symptoms and the methods to verify if you are hitting the same issue:
AHV host has successfully entered maintenance mode. Connect to any CVM in the cluster and run the following command to check AHV host state:
nutanix@cvm:~$ acli host.list
Example output:
nutanix@cvm:~$ acli host.list
If the host is schedulable, then True value is shown in the Schedulable column. Otherwise, False value is shown.Run the following command to check AHV host state:
nutanix@cvm:~$ acli host.list
Example output:
nutanix@cvm:~$ acli host.list
Here we can see that host is in maintenance mode.
Node remove status for the host is MARKED_FOR_REMOVAL_BUT_NOT_DETACHABLE. Connect to any CVM in the cluster and run the following command to check the node removal status:
nutanix@cvm:~$ acli host.list
Example output:
nutanix@cvm:~$ acli host.list
Node is removed from the Cassandra ring. Connect to any CVM in the cluster and run the following command to check that the node has been removed from the Cassandra ring:
nutanix@cvm:~$ acli host.list
Here we can see that node has been removed from the Cassandra ring.
Node is present in the Prism Hardware list with the status of "Being removed".. This is a known issue that is resolved in AOS 5.15.2 and later versions. Upgrade AOS to the latest supported version to avoid this issue. If you are currently affected by this issue, gather the information above and engage Nutanix Support to resolve it. When logging a support case with Nutanix Support on this issue, collect the log bundle using Logbay and upload it to the case directly via FTP/SFTP, or manually via the Support Portal. For more information on how to use Logbay, see KB 6691. To automatically upload the log bundle using Logbay, use the --dst (FTP/SFTP destination) and -c (support case number) options.
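Before engaging Support, the symptoms above can be quickly re-confirmed from any CVM; a minimal sketch (the removal-status command is an assumption - verify it exists on your AOS version):
# Host connection, schedulable state and maintenance mode
nutanix@cvm:~$ acli host.list
# Node removal progress as reported by the cluster
nutanix@cvm:~$ ncli host get-remove-status
# Metadata ring membership - the affected CVM should no longer be listed
nutanix@cvm:~$ nodetool -h 0 ring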
. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NKE cluster scrubbing failed. ### Assistant: NKE clusters PreUpgradeChecks fail with message "NKE cluster scrubbing failed".. Trying to upgrade Kubernetes or node OS-image might fail during the PreUpgradeChecks phase with the error message:
PreUpgradeChecks: preupgrade checks failed: unable to accept the request. NKE cluster scrubbing failed, check error logs and rectify.
The failing precheck prevents the upgrade from starting.
. Prism Central hosts the Karbon_core service. This service manages NKE clusters and does a scrub operation on all registered NKE clusters when the service starts and/or is restarted. The scrub operations are performed to ensure all NKE cluster nodes are configured properly; for instance, if there are any proxy-related changes on the PE cluster hosting the NKE nodes, the Karbon_core service pushes the new proxy configuration to NKE nodes and services.Scrub operations for a specific NKE cluster might not complete properly or fail for various reasons. For example, if the NKE cluster is not healthy, the error message is logged to karbon_core.log file on PCVM. This log file should be inspected to understand why scrubbing has failed:
nutanix@PCVM:~$ less /home/nutanix/data/logs/karbon_core.out
Note: On a scale-out PC, the "karbon_core.out" log file should be inspected on the Karbon_core leader service.Execute below command to find the NKE/karbon_core service leader on a scaled-out PC deployment:
nutanix@PCVM:~$ less /home/nutanix/data/logs/karbon_core.out
For example, karbon_core service on PCVM "x.x.x.103" is the service leader:
nutanix@PCVM:~$ less /home/nutanix/data/logs/karbon_core.out
Check karbon_core.out log file to see why the scrub operation has failed for the affected NKE cluster.Most of the time, this issue is not noticed until a K8s or OS-Image upgrade is done. This could be a long time since the karbon_core service was last restarted, and the logs would have been rotated already.To resolve this issue and after inspecting karbon_core.out log file for possible causes:
Ensure the NKE cluster is reported as healthy, either in PC WebUI or using karbonctl command from PCVM: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-clusters-tab-r.html Or via karbonctl: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-logon-karbonctl-t.html
nutanix@PCVM:~$ less /home/nutanix/data/logs/karbon_core.out
Restart karbon_core service on all PCVMs; wait for 10 minutes till a new scrubbing operation is completed, then retry the upgrade task: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-service-restart-t.html
If the above does not resolve the issue, contact Nutanix Support, collect Karbon_core logs from Prism Central and attach them to the case for further analysis. See KB-12133 for instructions on how to collect karbon/NKE logs. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Which Alerts Automatically Generate a Support Case with Nutanix Support. ### Assistant: This article provides details about who Nutanix contacts when a case is generated, how to change the contact, and which alerts generate cases.. When enabled, a Nutanix cluster automatically sends alerts to Nutanix Support. Alert cases are raised with Nutanix Support for a subset of alerts that have been identified as being important, so action can be taken on them. Most of the alert cases are created with a P4 priority level, which has a target response time of 24 hours. NodeDegraded and CassandraSSTableHealth are created with a P2 priority level, which has a target response time of two hours. Nutanix has also enabled an automatic first response email for most P4 alerts. If a primary Support Contact is listed for a node, the email is sent to that contact, with Nutanix Support following up on the email within the next two business days. Nutanix sends parts to the Parts Shipment Address listed for the node in the Nutanix Support Portal. It is important to keep this address up-to-date for timely dispatch of parts and prompt resolution of hardware issues..
Who is Contacted?
Alert cases are handled by Nutanix Support, who reach out to the primary Support Contact (also referred to as the break/fix contact) associated with the node or cluster in question. For customers with a third-party providing front-line support, it is normal to have a representative from that company listed as the Primary Support Contact.
To Whom Are Parts Sent?
By default, parts are sent to the Parts Shipment Address listed on the node's asset details in the Nutanix Support Portal. It is important to keep this address up-to-date for timely dispatch of parts and prompt resolution of hardware issues. Out-of-date addresses may result in parts being sent to the wrong address/person, which results in delays in receiving the parts and getting your system back to a healthy state.
Changing the Contact and Parts Shipment Address
Perform the following to modify the break/fix contact in the Support Portal:
Go to Assets > Blocks
Select the asset for which you want to modify the contact and/or Parts Shipment Address
Click Actions -> Update and specify the new contact/address details
Which Alerts Generate Cases?
The following alerts generate alert cases.
Notes:
The table is updated for the latest AOS and NCC releases; earlier releases do not send all of these alerts. Nutanix recommends keeping the NCC version updated to the latest release. NCC can be updated independently from the AOS.
Alert cases are generated only when the cluster is under an active support contract.
You may need to log in to the Support Portal to view some of the KB articles below.
As per FA-83, on March 1st, 2021, Nutanix has deprecated the automatic support case creation functionality via nos-alert/nos-asup email (SMTP) notifications in favour of automatic case creation via Insights Discoveries using Pulse.
For performing Discovery-based case creation on Nutanix clusters, the minimum NCC requirement is 3.10.x.
A1031 HomePartitionUsageHigh and A3038 FsInconsistencyCheck are temporarily disabled and do not generate support cases.
Case Generating Alert Table
NameAlert IDDescriptionTypeSFDC Case PriorityKBAlert SeverityAlert TypeMetadataDiskMountedCheckA101055Check that all metadata disks are mountedSWP4KB 4541CriticalNormalCassandraSSTableHealthA21021Health warnings detected in metadata serviceSWP2KB 7077CriticalCluster wideCuratorScanFailureA1081Curator scan has failedSWP4KB 3786CriticalCluster wideDiskSpaceUsageHighA1005Disk capacity is above 90%SWP4KB 3787CriticalCluster wideNodeDegradedA130087One or more critical processes on a node are not responding to the rest of the clusterSWP2KB 3827CriticalCluster wideStargateTemporarilyDownA1030The Stargate process on a node has been down for more than 3 hoursSWP4KB 3784CriticalCluster wideSATADomPECyclesErrorA1180The SATADOM has exceeded a wear thresholdHWP4KB 4137CriticalNormalECCErrorsLast1Day*A1187ECC errors over the last day have exceeded the one day thresholdHWP4KB 4116CriticalCluster wideECCErrorsLast10Days*A1188ECC errors over the last 10 days have exceeded the 10 day thresholdHWP4KB 4116CriticalCluster wideHardwareClockFailure*A1059Hardware Clock FailureHWP4KB 4120CriticalNormalPhysicalDiskBad*A1104A physical drive in the node has been reported as badHWP4KB 4158CriticalNormalDiskDiagnosticFailure*A1139Drive diagnostic has failed.HWP4KB 6287WarningNormalPowerSupplyDown*A1046One of the power supplies in the chassis has been reported downHWP4KB 4141CriticalCluster wideSystemTemperatureHigh*A1012A node in the cluster has an abnormally high temperatureHWP4KB 4138CriticalNormalSATADOMUnreachableA1177SATA DOM in the node cannot be reachedHWP4KB 7813CriticalNormalSATADOMFailedA1165SATA DOM in the node has failedHWP4KB 1850CriticalNormalShellvDisksAboveThresholdA1182Number of Shell vDisks in the cluster is above the thresholdSWP4KB 8559CriticalNormalIPMISELUECCCheckA15039Check for number of UECC errors for last one day in the IPMI SELHWP4KB 8885CriticalNormalLcmFamilyMismatchD3-0035Incorrect LCM family can cause upgrade failureSWP4KB 9898CriticalNormalM2BootDiskFailureA106042M.2 Boot Disk change checkHWP4KB 4874WarningNormalHwRaidCheckFailureA106053Host Hardware RAID status CheckHWP4KB 6627CriticalNormalDiskStatusCheckFailureA106061Disk status check for SATA disksHWP4KB 8094CriticalNormalDiskStatusCheckFailureA106062Disk status check for SAS disksHWP4KB 8094CriticalNormalDiskStatusCheckFailureA106063Disk status check for host boot disksHWP4KB 8094CriticalNormalDiskInodeUsageHighA1027High inode usage on this node may cause cluster performance to be significantly degraded.HWP2KB 1532CriticalCluster uuidCvmBootRaidDegradedA1107The node is operating wih a degraded boot RAID volume on the Controller VM.HWP2KB 2463CriticalCluster uuidStoragePoolSpaceUsageCheckA1128Space usage in the storage pool has exceeded the critical threshold (90% of usable space by default)SWP2KB 2475CriticalCluster uuidNvmeCheckFailureA1196The node is operating with a problematic NVMe driveHWP4KB 3751CriticalCluster uuidStargateMergedvBlockCrashD3-0075Stargate crashed due to merged vblock pointing to non-existing eidsSWP2KB 12431CriticalAlert group and cluster uuidMetadataDiskUnmountedFatalD6-004A metadata disk that is expected to be in-use is no longer mountedSWP4KB 4494CriticalAlert group and cluster uuidDiskRemovalStuckA130156External alert generated by Curator for when Disk removal process is stuck for a long timeSWP4KB 2159CriticalCluster uuidCassandraRingSkewThresholdD3-0114-ACluster performance at risk due to detected Cassandra metadata imbalance on 3 node clusterSWP4KB 1342CriticalCluster 
uuidCuratorExtentRegionInfoCrashD4-0015Curator has recently crashed due to an extent region having multiple vblocks pointing to the same egroupIDSWP2KB 13306CriticalCluster uuidCuratorMetadataInconsistencyFatalD4-0016Curator scans may crash due to metadata inconsistencySWP2KB 13779CriticalCluster uuidCuratorVblockMetadataInconsistencyFatalD4-0017Curator service has crashed due to inconsistent vblock reference in metadataSWP2KB 13860CriticalCluster uuidDiskOfflineD0-1044Node storage capacity potentially affected due to a drive marked offlineHWP4KB 8453CritcalCluster uuidStargateErasureCodingFTCrashD4-0018Stargate has recently crashed due to two EC strip members found on the same nodeSWP2KB 14286CriticalCluster uuid
ECC Alert Case Creation Logic
ECC Discovery Name | Platform | Alert ID | Additional Logic | Case Creation | Discovery Recommendation | KB
D0-1187-UECC | NX platforms | A15039 | n/a | Enabled | Replace DIMM | KB 8885
D0-1187-Generic | NX platforms | A1187 / A1188 | Alert msg could not be retrieved | Enabled | Troubleshoot DIMM alert | KB 4116
D0-1187-G4G5-BIOS | G4/G5 | A1187 / A1188 | BIOS < G4G5 | Disabled | Upgrade BIOS and BMC to latest version. Do not replace DIMM. | KB 3357
D0-1187-G4G5-DIMM | G4/G5 | A1187 / A1188 | DIMM CECC error message and BIOS version >= G4G5 | Enabled | Replace DIMM | KB 3357
D0-1187-G6G7-DIMM-BIOS | G6/G7 | A1187 / A1188 | DIMM CECC error message and BIOS version < 41.x | Enabled | Replace DIMM. Upgrade BIOS to latest version | KB 9137
D0-1187-G6G7-BIOS | G6/G7 | A1187 / A1188 | DIMM CECC error message and BIOS = 41.x | Disabled | Upgrade BIOS and BMC to latest version. Do not replace DIMM. | KB 9137
D0-1187-G6G7-DIMM | G6/G7 | A1187 / A1188 | DIMM CECC error message and BIOS >= 42.x | Enabled | Replace DIMM | KB 7503
D0-1187-hPPR-Fail | G6/G7 | A15040 | n/a | Enabled | Replace DIMM | KB 7503
D0-1187-hPPR | G6/G7 | A1187 / A1188 | DIMM RAS event error message and BIOS >= 42.x | Disabled | Reboot host for hPPR (Hardware Post Package Repair) feature | KB 9137
PSU Alert Case Creation Logic
Discovery Name | Platform | Alert ID | Additional Logic | Case Creation | Discovery Recommendation | KB
D0-1046 | All | A1046 | n/a | NX Platforms Only | Replace PSU | KB 7386
D0-1046-ENV | All | A1046 | PSU State value = 0x0 or 0xb | Disabled | Confirm the power supply is connected to its power source | KB 7386
* This is an auto-first response eligible alert. Where a primary Support Contact is configured, an initial email will be sent.
You can find more information about automatic alerts and the Pulse functionality in Alert and Event Monitoring of the Prism Web Console Guide for your AOS release.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about NCC Health Check: cvm_startup_dependency_check. ### Assistant: Controller VM (CVM) may fail to start after the host is rebooted. The NCC health check cvm_startup_dependency_check determines whether any problems regarding CVM bootup are likely to occur upon the host reboot.. If an ESXi host is restarted, the Controller VM (CVM) may fail to start. The NCC health check cvm_startup_dependency_check determines whether any problems regarding CVM bootup are likely to occur when the host is restarted.
This check runs on ESXi clusters only and verifies the presence of any startup dependencies that may result in a CVM not starting after the host reboot.
The check examines the contents of /etc/rc.local.d/local.sh (on the ESXi host) and makes sure it is intact and not missing anything that might be important to start the CVM after the host restarts.
Note: In ESXi 5.1+, the rc local script is /etc/rc.local.d/local.sh while in ESXi 5.0, it is /etc/rc.local.
The cvm_startup_dependency_check consists of the following:
PYNFS dependency check
localcli check - checks if esxcli command is used in the rc local script. (Deprecated in NCC version 3.10.0.)
vim command check - checks if "vim-cmd vmsvc/power.on" command is present in the rc local script.
Autobackup check - checks if /sbin/auto-backup.sh has been run successfully.
Network adapter setting check - checks if the network adapter is set to connect on power on.
EOF check - checks if there is an "EOF" line at the end of the rc local script.
RC local script exit statement present - checks if there is a top-level exit statement in the rc local script.
.dvsData directory in local datastore - checks if .dvsData directory is present on pynfs mounted local datastore and if it is persistent.
Svmboot mount check - checks if svmboot.iso is present on mounted local datastore.
Based on the outcome of the above checks, the result is either PASS, INFO, FAIL or ERR.
Running the NCC CheckYou can run this check as part of the complete NCC Health Checks:
nutanix@cvm$ ncc health_checks run_all
Or you can run this check individually:
nutanix@cvm$ ncc health_checks run_all
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is not scheduled to run on an interval.This check will not generate an alert.
Sample outputFor Status: PASS
nutanix@cvm$ ncc health_checks run_all
For Status: INFO
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
For Status: FAIL
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ ncc health_checks run_all
Output messaging
Check ID106436DescriptionCheck that /sbin/auto-backup.sh has run successfullyCauses of failure/sbin/auto-backup.sh has not run successfully.ResolutionsMake sure '/bootbank/state.tgz' has a newer timestamp.ImpactCVM may not boot up after the host reboot.
Check ID106439DescriptionCheck that .dvsData directory is present on pynfs mounted local datastoreCauses of failure.dvsData directory is not persistent yet.ResolutionsCheck if .dvsData directory exists in the local datastore.ImpactCVM may not boot up after the host reboot.
Check ID106437DescriptionCheck that there is no line EOF at end of RC local scriptCauses of failureEOF statement at end of RC localResolutionsCheck that 'local.sh' does not have 'EOF'ImpactCVM may not boot up after the host reboot.
Check ID106438DescriptionCheck that top level exit statement is not present in RC local scriptCauses of failureTop-level exit statement present in script RC local preventing script lines from being run.ResolutionsCheck if 'local.sh' has an 'exit' statement. Generate INFO if the exit statement is NOT within the if..fi statementImpactCVM may not boot up after the host reboot.
Check ID103068DescriptionCheck Network adapter settingCauses of failureNetwork adapter is not set to not connect on power onResolutionsCheck if ethernet0.startConnected =true is present in the CVM's .vmx file.ImpactCVM may not boot up after the host reboot.
Check ID106431DescriptionCheck PYNFS dependencyCauses of failurePYNFS is in use and is not present.ResolutionsValidate PYNFS configuration.ImpactCVM may not boot up after the host reboot.
Check ID106433DescriptionCheck that vim-cmd vmsvc/power.on command is present in the local scriptCauses of failurevim-cmd vmsvc/power.on\ command not present in local script.ResolutionsCheck if 'vim-cmd vmsvc/power.on' entry is present in 'local.sh'.ImpactCVM may not boot up after the host reboot.
Check ID106440DescriptionChecks if CVM ISO is on mounted local datastoreCauses of failureServiceVM_Centos.iso is not on the mounted local datastore.ResolutionsCheck if ServiceVM_Centos.iso exists in the local datastore.ImpactCVM may not start after the host reboot.. Find your error message in the table below and perform the corresponding recommended actions.
Failed check - Error message - Recommended actions
PYNFS dependency check - "PYNFS is in use and is not present" - If pynfs is in use, make sure the pynfs VIB is present and, if pynfs.tar.gz is referenced in /etc/rc.local.d/local.sh, make sure it is present and readable.
vim command check - ""vim-cmd vmsvc/power.on" command not present in local script" - Make sure the 'vim-cmd vmsvc/power.on' entry is present in /etc/rc.local.d/local.sh.
Autobackup check - "/sbin/auto-backup.sh has not run successfully" - Check whether /sbin/auto-backup.sh has been run successfully by confirming that /bootbank/state.tgz is newer than /etc/rc.local.d/local.sh. The NCC check fails if /bootbank/state.tgz is not newer than /etc/rc.local.d/local.sh. Sample result for the failure scenario:
nutanix@cvm$ hostssh ls -al /bootbank/state.tgz
nutanix@cvm$ hostssh ls -al /bootbank/state.tgz
Network adapter setting check - "Network adapter is set to not connect on power on" - Make sure the CVM VMX file does not have the setting "ethernet0.startConnected". If it does, make sure it is set to "true".
EOF check - ""EOF" statement at end of rc local" - Make sure /etc/rc.local.d/local.sh does not have an 'EOF' line.
.dvsData directory in local datastore - ".dvsData directory is not persistent yet" - If the .dvsData directory exists in the local datastore, make sure it is persistent.
Svmboot mount check - "Failed to find ServiceVM*.vmx" "Invalid vmx configuration" "ServiceVM_Centos.iso is not mounted" "ServiceVM_Centos.iso missing in vmx configuration"
Make sure /vmfs/volumes/NTNX-local-ds*/*/ServiceVM*.vmx exists, is readable and its contents are valid.
Make sure /vmfs/volumes/NTNX-local-ds-XXXXXXX-A/ServiceVM_Centos/ServiceVM_Centos.iso is present in the CVM's .vmx file and is mounted.
Make sure ServiceVM_Centos.iso is checked to start connected.
Make sure the local datastore name is not changed to a non-default from NTNX-local-ds*. (non-configurable vSphere components) Rename the ESXi local datastore name to the default format of NTNX-local-ds* from the vCenter server and run the check again.
Make sure references to files ide0:0.fileName = "ServiceVM_Centos.iso" and serial0.fileName="ServiceVM_Centos.0.out" are not showing a full path like ide0:0.fileName = "/vmfs/volumes/5a3a73f2-59cb5028-8a34-0cc47a9bc41e/ServiceVM_Centos/ServiceVM_Centos.iso", only the file names should appear.
Make sure memsize and shed.mem.minsize are not using a capital S at the 'size' word.
Do not restart the host or trigger any rolling reboot activity like CVM Memory upgrade or AOS upgrade until the check passes or Nutanix Support confirms that a restart will not cause any problem with the CVM bootup.
In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Changing the IPMI password via ipmitool fails with the "event not found" error. ### Assistant: Changing the IPMI password via ipmitool may fail with the "event not found" error if the password contains special characters like "!".. Changing the IPMI password via ipmitool may fail with the "event not found" error if the password contains special characters like "!".
nutanix@cvm:~$ hostssh ipmitool user set password 2 abcdef!xyz. Special characters like "!" are interpreted by bash resulting in command failures.Use one of the following approaches to resolve this issue:
If using a command with hostssh, enquote the complete command:
nutanix@cvm:~$ hostssh 'ipmitool user set password 2 <password>'
When running the command directly on the AHV host, enquote the password:
nutanix@cvm:~$ hostssh 'ipmitool user set password 2 <password>'
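As a minimal sketch, assuming the password abcdef!xyz from the example above, either quoting style keeps bash from expanding the "!" (the root@ahv prompt below is illustrative):
# From a CVM, single-quote the entire remote command
nutanix@cvm:~$ hostssh 'ipmitool user set password 2 abcdef!xyz'
# Directly on the AHV host, single-quote just the password
root@ahv# ipmitool user set password 2 'abcdef!xyz'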
Changing it using IPMI GUI. Example for Nutanix hardware: Configuration tab -> Users -> Choose User -> Modify.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Nutanix Move - FAQ (Frequently Asked Questions). ### Assistant: Nutanix Move is a tool used to migrate VMs to Nutanix AHV by automating the process of installing VirtIO and retaining the IP as well as the MAC address on the AHV side. This article contains Frequently Asked Questions (FAQ) on Nutanix Move.. The Nutanix Move User Guide is present on the Nutanix Support Portal, and we advise you to review the docs to understand the product workflow.
Below are Frequently Asked Questions on Nutanix Move.. What is Nutanix Move and why do we need it?
Nutanix Move is a tool used to migrate VMs to Nutanix AHV by automating the installation of VirtIO and retaining the IP and MAC addresses on the AHV side. Move removes the manual effort of copying the vDisks one by one, importing them into AHV using Image Configuration, installing VirtIO on each machine, and keeping track of the IP addresses.
Migrations from which hypervisors to AHV are supported?
Move 1.x and Move 2.x support migration from ESXi to AHV.
Move 3.x additionally supports migration from Hyper-V to AHV, AWS to AHV, and ESXi to ESXi (on Nutanix).
What are the Port requirements for Move?
Requirements for ESXi migration:
ESXi hosts should be reachable from Nutanix Move on ports TCP 443 and TCP 902.
vCenter should be reachable from the Nutanix Move appliance on port TCP 443.
Allow ports (TCP and UDP) 2049 and 111 between the Nutanix Move network and the AHV CVM network.
Requirements for Hyper-V Migration:
WinRM-HTTPS: 5986
WinRM-HTTP: 5985
RDP: 3389 (only for inbound)
SSH: 22
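As a hedged sketch, a simple TCP connect test from the Move appliance can confirm these ports are reachable. This assumes nc is available on the Move VM and uses placeholder IP addresses; note that 2049 and 111 also require UDP, which nc -z alone does not verify:
# ESXi host ports
[admin@nutanix-move]$ nc -zv <esxi_host_ip> 443
[admin@nutanix-move]$ nc -zv <esxi_host_ip> 902
# vCenter port
[admin@nutanix-move]$ nc -zv <vcenter_ip> 443
# AHV CVM NFS ports (TCP side)
[admin@nutanix-move]$ nc -zv <cvm_ip> 2049
[admin@nutanix-move]$ nc -zv <cvm_ip> 111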
Which guest OSes are supported for Migration?
Supported guest OSes for ESXi to AHV migration: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-support-os-r.html
Supported guest OSes for ESXi to ESXi migration: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-support-guest-os-esxi-to-esxi-r.html
Supported guest OSes for AWS to AHV migration: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-supported-os-aws-r.html
Support guest OSes for AWS to ESXi migration: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-supported-os-aws-esxi-r.html
Supported guest OSes for Hyper-V to AHV migration: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v3_6:v36-supported-os-hyperv-r.html
How to Migrate Guest OSes using Data only Migration: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-perform-data-only-migration.html
What are the Default Credentials for Nutanix Move?
Default credentials for Nutanix Move GUI:
Username: nutanix
Password: nutanix/4u
Default credentials for Nutanix Move CLI:
Username: admin
Password: nutanix/4u
Note: The admin user on Move does not have full privileges, so the best approach is to switch to the root user using the rs command.
[admin@nutanix-move]$ rs
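For illustration only, a typical session looks like the sketch below; the exact prompt strings may differ between Move versions:
# SSH to the Move appliance as the admin user, then switch to a root shell
$ ssh admin@<move_ip>
[admin@nutanix-move]$ rs
[root@nutanix-move]# whoami
root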
How to Collect log bundle on Nutanix Move?
To download the latest support bundle: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_4:v34-support-bundle-c.html#nconcept_uxw_ppz_bgb
How can I upgrade Nutanix Move?
To upgrade Nutanix Move using the 1-click method, dark site upgrade or CLI based upgrade: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v3_6:v36-upgrade-management-c.html#nconcept_byq_nnl_b3b
Does Nutanix Move support migration of Exchange Server?
Exchange should be migrated by installing newer versions of Exchange Server in parallel with the existing production environment and then moving user mailboxes from the old system to the new one.
Refer to the migration limitations: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-migration-limitations-r.html
Does Nutanix Move support migration of MySQL / Oracle database servers?
Nutanix Move supports migration of smaller databases with a light workload. For a heavily loaded database VM, we recommend following the best practices published by Microsoft and Oracle. Nutanix also publishes Best Practices guides for running such workloads with optimized performance, so we recommend cloning the role to a newly created VM on AHV rather than migrating the VM using Move.
You can refer to this portal document pointing to the limitations of Nutanix Move: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-migration-limitations-r.html
Best Practices Guide for Oracle on AHV: https://portal.nutanix.com/#/page/solutions/details?targetId=BP-2073-Oracle-on-AHV:BP-2073-Oracle-on-AHV
Best Practices Guide for MySQL server on Nutanix: https://portal.nutanix.com/#/page/solutions/details?targetId=BP-2056-MySQL-on-Nutanix:BP-2056-MySQL-on-Nutanix
Can the VMs be migrated if they are powered off?
Yes, the VMs can be migrated if they are powered OFF. However, they will be powered ON to install the necessary drivers if "Automatic Preparation Mode" is selected. If you choose "Manual Preparation Mode", you will need to copy the necessary scripts to the VMs and run them manually, regardless of their power state. In either case, downtime is minimized: when you click "Cutover", the VMs are powered off on the source and powered on on the target.
What are the supported hypervisors we can migrate from?
Move supports migration from:
ESXi to AHV
Hyper-V to AHV
AWS to AHV
ESXi to ESXi (on Nutanix)
Supported Versions are listed here
How can I change the IP address of my Move VM?
To assign a static IP address to Move, check this link
How many VMs can be migrated from Move in one go?
There is no limit on the number of VMs that can be migrated in one go. However, the default limit for running parallel migrations is 32. The rest would be in the “Queued for Migration” state.
Are there logs created on VM by Move?
Yes, logs are created on the VMs when Move injects the VirtIO and IP-retention scripts.
On Windows, it can be found in:
[admin@nutanix-move]$ rs
On Linux, it can be found in:
[admin@nutanix-move]$ rs
What are the common user permissions needed on the Windows side if using a Domain administrator or local Administrator?
Check 3 things for any user on Windows VM to be qualified for Move to use:
Users must have "Logon as Batch Job" enabled.
UAC must be disabled (for the local administrator, it is usually disabled).
Users must have "Restore files and directories" rights.
What are the common user permissions needed on the Linux side if using root user?
Below are the configuration settings to look out for when migrating a Linux VM using credentials.
Ensure ssh to localhost works using only password authentication, i.e. set "PasswordAuthentication yes" in sshd_config.
Ensure that, using sudo, the user can assume root privileges without entering a password.
Ensure read, write and execute permissions on the /tmp, /opt and /boot folders.
Ensure root access and that the sshd service is running.
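The commands below are a hedged sketch for spot-checking these prerequisites from inside the Linux guest; paths and service names assume a typical systemd-based distribution:
# Password authentication must be enabled for sshd
$ sudo grep -i '^PasswordAuthentication' /etc/ssh/sshd_config
# The user must be able to assume root privileges without a password prompt
$ sudo -n true && echo "passwordless sudo OK"
# /tmp, /opt and /boot must allow read, write and execute for the migration user
$ ls -ld /tmp /opt /boot
# sshd must be running
$ systemctl status sshd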
Why do we need to install VirtIO to the machines when performing the migration to AHV?
Nutanix VirtIO needs to be installed on the VMs before migrating to AHV as its functionality is similar to VMware Tools in ESXi. VirtIO helps in detecting the disks when booting up with SCSI controllers on an AHV cluster. You might run into issues post-migration where VMs fail to boot up (Windows machines might run into "Boot device not found" and Linux machines might show "boot partition was not detected").
Make sure you enter the correct credentials to the VM so that Move can take care of the installation of VirtIO OR you can install the VirtIO drivers manually, which are available from the Nutanix Portal in the AHV / VirtIO download section.
To manually install VirtIO on a Linux VM, follow the article Migrating Linux VMs to Acropolis.
If the VM has VirtIO installed on the VM, check if the correct boot device is selected on Prism for the VM to boot.
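As a hedged example of verifying that the VirtIO drivers are actually present in a Linux guest before cutover, the commands below assume a RHEL/CentOS-style distribution where lsinitrd and dracut are available; the module names are the commonly used virtio drivers and may vary by kernel:
# Check whether virtio drivers are already included in the current initramfs
$ lsinitrd | grep -i virtio
# If they are missing, rebuild the initramfs with the virtio drivers added
$ sudo dracut --force --add-drivers "virtio_blk virtio_scsi virtio_net virtio_pci"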
What is the maximum waiting time for a VM to be left in Ready to Cutover state?
We recommend performing the cutover within 1 week of the initial Data Seeding completing for the VM, because Nutanix tests the operation under those conditions. You can refer to the document here for further reference.
Note that the cutover operation needs user intervention, as the VM will be powered off on the source and powered up on the target cluster. Make sure that no users are impacted during this operation, and only select the VM and click the "Cutover" button on the UI after all concerned parties have given the green light for that VM.
How is migration impacted if my source and target cluster is located in physically different locations?
With high-speed Internet and no bottlenecks, the migration will run smoothly over WAN. Keep a close check if the migration is being performed during production hours so that there are no performance impacts.
When creating a new Move, we see multiple IPs on Prism UI?
If you have a DHCP server configured on the cluster when Move VM is powered on, DHCP will assign a temporary IP to the server. This only pops up on the UI (Prism behavior). You can ignore it and proceed to add static IP to the Move VM. Note that it will take some time for the DHCP server IP to vanish from the UI. Make sure that the Move VM is reachable over the static IP and the UI can load over the static IP configured (http://<static_move_ip>).
I have a DHCP server in my environment but the Move VM migration is failing / Move VM gets an APIPA IP. How can I overcome this?
Make sure you add the Move VM's MAC address as "trusted" on your DHCP server. Also check whether the Move VM is being assigned a new IP every minute/hour and adjust the DHCP lease for the VM accordingly. To avoid running into such conditions, configure a static IP for the VM.
If I have a UEFI enabled VM on the source cluster, will it be supported for migration using Move?
Nutanix Move supports UEFI-enabled VM migration from Move 3.3 and AOS 5.11 onwards. However, it does not yet support UEFI Secure Boot.
I see VMware Tools installed on the VM post-migration. Should I delete it?
Yes, you can proceed to delete VMware Tools post-migration from the AHV cluster. Note that if the migration was from a non-Nutanix ESXi cluster to a Nutanix ESXi cluster, the VMs might BSOD due to VMware Tools being outdated. You can re-install VMware Tools to the VM so that the disks are available on the OS level.
Credentials for source/target cluster changed during the migration. Now, migration is failing due to incorrect username/password. What should be done?
Below are workarounds that can be followed:
Enter the new password on the target/source cluster by hovering over the 3 dots -> Edit -> enter the username/password, then restart all the services on the Move VM (this will mark all ongoing migrations as failed, so be careful when restarting).
Change the password back to the old password and restart all the services on Move (this will also mark all ongoing migrations as failed, so be careful when restarting).
Nutanix Engineering is aware of this issue and is working on improving the workflow when there is a change in the passwords.
Does Move Support Special Characters in VM Passwords?
Move supports using special characters in VM passwords; all standard special characters are supported. However, language-specific special characters used in passwords are not supported on Move.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |
### Human: Can you tell about Connecting the network adapter using ESX command line. ### Assistant: Fixing Virtual Machine network issue from command line.. If a UVM or CVM is powered on but not available on the network via ping or SSH, the network adapter might not be connected.. Use the following commands on the Node to determine if the network adapter is not connected and re-connect if needed.
List all VMs on the Node. Note the VMID of the VM you are trying to reach (in this example VMID 8):
root@esx# vim-cmd vmsvc/getallvms
root@esx# vim-cmd vmsvc/getallvms
Identify the device information. Verify that the network is connected. In this case, it is not.
root@esx# vim-cmd vmsvc/getallvms
Find the DEVICEID for the NIC card that is connected to the local Network. In this case the DEVICEID is 4000
root@esx# vim-cmd vmsvc/getallvms
From the above command, we can verify that the vNIC is connected = false. This needs to be fixed, as follows.
root@esx# vim-cmd vmsvc/getallvms
where Z is true (1; enable) or false (0; disable).
For example: Enable the network:
root@esx# vim-cmd vmsvc/getallvms
Disable the network:
root@esx# vim-cmd vmsvc/getallvms
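Because the command output above is abbreviated, the sketch below reconstructs the likely sequence; confirm the exact vim-cmd subcommand names with 'vim-cmd vmsvc' on your ESXi build before running them:
# List VMs and note the VMID (8 in this example)
root@esx# vim-cmd vmsvc/getallvms
# Inspect the VM's devices and locate the NIC DEVICEID (4000 in this example)
root@esx# vim-cmd vmsvc/device.getdevices 8
# Connect (true) or disconnect (false) the network adapter
root@esx# vim-cmd vmsvc/device.connection 8 4000 true
root@esx# vim-cmd vmsvc/device.connection 8 4000 false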
. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know. |