Unhandled Exception in the ESXi UI

The following two problems are seen with ESXi 6.7 Update 2 (Build 13006603).

  • Using the host UI (https://HOSTNameORIP/ui), clicking Storage and then Adapters raises the following unhandled exception, which blocks storage-related configuration from the ESXi UI.
Unhandled exception
Unfortunately, we hit an error that we weren't expecting.
The client may continue working, but at this point,
we recommend refreshing your browser and submitting a bug report.
Press the Esc key to hide this dialog and continue without refreshing


Clicking Details shows the following lines.

Cause: Possibly unhandled rejection: {}
Version: 1.33.3
Build: 12923304
ESXi: 6.7.0
Browser: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0
Exception stack:


  • Using the host UI, importing a virtual machine from an OVF template produces the following stack trace.
TypeError: Cannot read property 'keyValue' of undefined
    at updateSummaryPortlet (
    at $scope.wizardOptions.onFinish (
    at m.$digest (
    at e (


vmkernel.log shows the following lines.

2019-08-08T19:03:40.585Z cpu32:2099684 opID=27da76af)World: 11943: VC opID esxui-4004-bd50 maps to vmkernel opID 27da76af
2019-08-08T19:03:40.585Z cpu32:2099684 opID=27da76af)NVDManagement: 1461: No nvdimms found on the system


Clicking the Reload option logs you out of the ESXi web UI, and you have to log in to the UI again.

This particular problem is fixed in ESXi build 13981272 (ESXi 6.7 EP 10) and later. Refer KB  to learn more about ESXi build versions.
ESXi patch download link
Select ESXi and version 6.7, then search to get the latest patch.

The workaround is to press Esc after the error appears, or, if the host is managed by vCenter, to import the OVF (and change storage settings) through the vCenter UI instead.

Unable to install ESXi on RAID 5 logical volume

The following error may be seen while installing ESXi on an HP ProLiant server when a RAID 5 disk volume is configured. With RAID 0 configured, the installation succeeds.


VMware ESXi 6.5.0 Installer failed.
This program has encountered an error:
Error (see log for more info)
"partedUtil failed with message "Function not implemented during read on /dev/disks/naa.... Error: The primary GPT table states that the backup GPT is located beyond the end of the disk"


Contact the HP support team for this error; it is caused by a disk problem in the RAID 5 configuration.

#esxi, #installation, #partedutil

IBM Cloud Object Storage (COS) configuration for API access

The following steps can be useful if you have an on-premises application (service) that needs to access (download/upload) files in IBM Cloud Object Storage (COS).

Open the IBM COS URL https://console.bluemix.net/catalog/.
'Sign up' for free access, then 'Login'. Select Object Storage from the Storage category.



Give the service an appropriate name, then click Create.



Once the service is ready, either click Create bucket or 'create your first bucket', as highlighted below.




Give the bucket an appropriate name; select the Resiliency, Location, and Storage class (Standard, Vault, Cold Vault, Flex); then click Create.



Select Endpoints and copy the endpoint name; it will be used by the API for access.



From the Service Credentials page, click 'New credential' and give the credential an appropriate name. If you have already created a Service ID, click Select Service ID; if not, choose Create New Service ID. If this is your first time, create a new Service ID (don't click Add yet).



Give the new Service ID an appropriate name, and in the 'Add Inline…' option paste {"HMAC":true}; this generates an Access Key ID and Secret Access Key for API access. Now click Add.



Now click View Credentials and note down the access key and secret key.



Use the endpoint, access key, and secret key to connect to the bucket with CyberDuck, CloudBerry, S3 Browser, or any equivalent tool.



You can create buckets and folders and upload/download objects with such a tool to confirm the storage configuration.
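Under the hood, the reason {"HMAC":true} matters is that S3-compatible clients never send the secret key itself; they use it to sign each request. A minimal sketch of the AWS Signature Version 4 signing-key derivation such tools perform (the secret key and date below are made-up placeholders, not real credentials):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive an AWS Signature V4 signing key via chained HMAC-SHA256.

    date is YYYYMMDD; region and service scope the key (e.g. "us-east-1", "s3").
    """
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Placeholder secret key -- substitute the COS secret access key.
key = sigv4_signing_key("example-secret-key", "20190808", "us-east-1", "s3")
print(key.hex())
```

The derived key is then used to HMAC the canonical request for every call; SDKs and GUI clients such as CyberDuck do this automatically once given the access/secret pair.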


Configuring Dell EMC Elastic Cloud Storage (ECS) for API access

This document is designed for test-environment use only; actual steps for production use might differ. Register an ECS Test Drive account  and complete the registration process. Once you have registered for your ECS Test Drive account, log in and click the credentials link at the top left of the page.



Once you receive the credentials, try one of the following tools to create an S3 bucket.

Method 1: Using CyberDuck
Download and install CyberDuck.
Click Open Connection; in the dialog window, choose 'S3 (Amazon Simple Storage Service)' from the first drop-down list.



As per the ECS credentials page:

Server name = object.ecstestdrive.com
Access Key ID = 131693042649205091@ecstestdrive.emc.com
Password = fbHIum2QY3A5xSr7Vlx63S+USGw3O1ULsHS9jmom

Then click Connect.



Click in the blank area, click 'New Folder', and give the bucket a name (e.g. 'storage-ecs-part1'). The name must be lowercase and globally unique.



Method 2: Using CloudBerry
Install CloudBerry (freeware) on your storage or test server.
Select File > New S3 Compatible Account > S3 Compatible.



The display name can be anything you wish; supply the service point, access key, and secret key as in Method 1. The test connection should succeed.



Create a new bucket.


The following two methods can also be used to test bucket access:
Another GUI tool, S3 Browser
The EMC ECS CLI (it requires registration with EMC)
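As noted above, bucket names must be lowercase and globally unique. A small sketch of a pre-check for the core S3 naming rules (simplified: 3-63 characters, lowercase letters, digits, and hyphens only; dot-style names are not handled here):

```python
import re

# Simplified S3 bucket-name rules: 3-63 chars, lowercase letters,
# digits, and hyphens; must start and end with a letter or digit.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_RE.match(name))

print(is_valid_bucket_name("storage-ecs-part1"))  # the name used above
print(is_valid_bucket_name("Storage-ECS-Part1"))  # uppercase is rejected
```

A check like this catches naming errors before the client makes a round trip to the storage endpoint.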



Configuration of Google cloud storage for application access

The following steps can be useful if you have an on-premises application (service) that needs to access (download/upload) files in Google cloud storage.

Open https://cloud.google.com/ and configure your free account if you haven't already.
Open the console and create a new project.



Select the correct project if you have access to multiple projects.


Select Storage from the 'Products and services' menu.



Click 'Create bucket' and give a name, storage class, and location.




Upload some files/folders manually.



Click Settings, then Interoperability; note down the cloud storage URL, access key, and secret key. If no secret key is present, click 'Create a new key'.



Testing bucket access:

Install CloudBerry (freeware) for Google cloud on your test server.
Connect to Google cloud storage using the access key and secret key. Make sure the 'Access & secret key' authentication option is selected.



Test copying (or cutting) and pasting files using the CloudBerry console.
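The Interoperability settings work because Google exposes an S3-compatible XML API at storage.googleapis.com, so the HMAC access/secret pair plugs into any S3 client. A small sketch of the path-style object URL that API uses (the bucket and object names are hypothetical):

```python
# Interoperability mode exposes Google cloud storage through an
# S3-compatible XML API; this is the endpoint shown on the
# Interoperability settings page.
GCS_ENDPOINT = "https://storage.googleapis.com"

def object_url(bucket: str, obj: str) -> str:
    """Path-style URL for an object via the interoperable API."""
    return f"{GCS_ENDPOINT}/{bucket}/{obj}"

print(object_url("my-app-bucket", "reports/summary.csv"))
```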




Configuring Rackspace Cloud Files Storage for API access

The following steps can be useful if you have an application (service/API) in your environment that needs to access (download/upload) files in Rackspace cloud file storage.

Sign up for the Rackspace cloud.
Go to the Rackspace control panel and log in with the root user account you configured during the signup process.



Create a new user account for API access: go to User Management from the Account tab.



Click Create user. This console lists all users created so far; one default (root) user is available by default.


Enter the user details such as first name, last name, email, phone, etc. The contact type must be Technical; then select appropriate permissions on the Rackspace cloud.




Once the user is created successfully, go to the user account's properties and copy the Rackspace API key.




From the control panel, select Rackspace cloud from the Product list, then select Files from the Storage list.



Create a new container, select an appropriate region, and keep the type as Private.



Manually upload some files using the console itself.



Testing bucket (container) access

Download the Rackspace Command Line Interface.
Go to the directory where you downloaded the rack binary and run the following command to connect to your containers.

rack.exe configure


Retrieve a list of containers:

rack files container list


List all of the objects in a specified container:

rack files object list --container StorageAccess


Upload a directory of objects into a specified container:

rack files object upload-dir --container StorageAccess --dir \temp\pictures


You can also try checking cloud access using the CloudBerry Backup tool.
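For reference, `rack configure` authenticates against the Rackspace Identity v2 API using the username and API key copied above. A sketch of the JSON token-request body such a client sends (the username and API key below are placeholders, not real credentials):

```python
import json

# Body POSTed to https://identity.api.rackspacecloud.com/v2.0/tokens.
# The username and API key here are placeholders.
def auth_payload(username: str, api_key: str) -> str:
    body = {
        "auth": {
            "RAX-KSKEY:apiKeyCredentials": {
                "username": username,
                "apiKey": api_key,
            }
        }
    }
    return json.dumps(body)

payload = auth_payload("storageuser", "0123456789abcdef")
print(payload)
```

The response contains a token and the Cloud Files service endpoints, which the CLI then uses for the container and object commands shown above.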


Configuration of Azure Blob storage for application access

The following steps can be useful if you have an on-premises application (service) that needs to access (download/upload) files in Microsoft Azure blob storage.

Open the Azure web portal and configure a free tier account if you haven't already.

Create a blob storage resource.



Give the storage account an appropriate name (it must be unique across the Azure infrastructure), select 'Blob storage' as the account kind, and select 'New' in the 'Resource group' section. If you have already created a resource group, you can select an existing one. Select 'Pin to dashboard' so you can access it directly from the dashboard.



From the dashboard, click the storage account 'appazurestorage1' you just created.



Create a new 'Container'. It is a kind of folder; in AWS and Google cloud the equivalent is called a bucket. Keep the access level as Private.



Click the 'Access keys' tab and note down the storage account name and key.



The basic configuration of blob storage is ready. You can now upload a few files manually using the Azure portal by clicking the container you just created.



Testing bucket access:

Method 1: Using CloudBerry
a. Download and install CloudBerry (freeware) for Azure on your on-premises server.
b. Connect to Azure Blob using the account name and access key.



c. Test copying (or cutting) and pasting files using the CloudBerry console.



Method 2: Using the MS AzCopy command-line tool
Download and install AzCopy on the server. Open a command prompt and switch to the AzCopy directory, which is most likely "C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy".

Download all blobs (files) of a container to local disk. Sample syntax:

AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer /Dest:C:\myfolder /SourceKey:key /S


AzCopy /Source:https://appazurestorage1.blob.core.windows.net/myappbucket /Dest:C:\temp\Azure /SourceKey:pQrvBr+rwoI9psWKx73SKcrE8M0JW+ZUQeIY05CJ+PJMGSFMpXV+U9Maygbtiwtc69+aPkabmZna6hxfhuw2NA== /S /sourceType:blob




Upload all blobs (files) to a container (bucket):

AzCopy /Source:C:\myfolder /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:key /S


AzCopy /Source:C:\temp\Azure\upload /Dest:https://appazurestorage1.blob.core.windows.net/myappbucket /DestKey:pQrvBr+rwoI9psWKx73SKcrE8M0JW+ZUQeIY05CJ+PJMGSFMpXV+U9Maygbtiwtc69+aPkabmZna6hxfhuw2NA== /S



More details about the AzCopy tool.
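The /Source and /Dest URLs in the AzCopy commands above follow a fixed pattern. A small helper that builds them, shown with the storage account and container created earlier:

```python
# Azure blob URLs used by AzCopy follow a fixed pattern:
#   https://<storage-account>.blob.core.windows.net/<container>[/<blob>]
def container_url(account: str, container: str) -> str:
    return f"https://{account}.blob.core.windows.net/{container}"

# The storage account and container created in the steps above.
print(container_url("appazurestorage1", "myappbucket"))
```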

Configuration of AWS S3 (Simple Storage Service) for application access.

The following steps can be useful if you have an on-premises application (service) that needs to access (download/upload) files in AWS S3 storage.

Sign in to the AWS Console with root account credentials.


Select IAM under the 'Security, identity and compliance' section.


Add a new user for API or Console access.


Give an appropriate 'User name' and select the access type.

Please note: selecting both access types isn't recommended for production use. This demo requires only 'Programmatic access'. You could also use the same user account to delegate AWS storage administration via the AWS console.


Select 'Attach existing policies directly', then search for S3 and attach 'AmazonS3FullAccess'.


Review the settings and then click 'Create user'.

Note down the user name, Access Key ID, Secret Access Key, and sign-in URL. You can also download a CSV file containing all of this information.


Select ‘S3’ from ‘Storage’ section.



Click 'Create bucket', give an appropriate name, and select a 'Region'; the bucket name must be unique across the AWS infrastructure. Then click 'Create'. I have skipped the remaining options such as versioning, permissions, and static-website settings for this test. However, if you need specific settings, please refer to the relevant documentation.


You can upload files manually using the AWS Console.



Testing bucket access from an on-premises application.

Method 1: Using CloudBerry

Install CloudBerry (freeware) and connect to the AWS S3 bucket.


You can copy (or cut) and paste files from the local machine to S3, or vice versa.



Method 2: Using PowerShell

Install the AWS Tools for PowerShell, open PowerShell, and use the following commands to test bucket access.

Set credentials:

Set-AWSCredentials -AccessKey AKIAI3ZDRI4HGSD4NOGQ -SecretKey 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

Get all buckets:

Get-S3Bucket

Show all contents of a specified bucket:

Get-S3Object -BucketName appdatatest1 -MaxKey 100 | Format-Table


Refer  for more details on AWS PowerShell commands.
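Instead of copying keys by hand, the credentials CSV downloaded at user-creation time can be parsed programmatically. A sketch, assuming a simplified version of the console's credentials.csv layout (column names may differ slightly between console versions, and the values below are placeholders, not real keys):

```python
import csv
import io

# Placeholder rows shaped like the console's credentials.csv download.
SAMPLE = """User name,Access key ID,Secret access key
apiuser,AKIAEXAMPLEKEYID,examplesecretkey123
"""

def load_credentials(csv_text: str) -> dict:
    """Return {user: (access_key, secret_key)} from a credentials CSV."""
    creds = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        creds[row["User name"]] = (row["Access key ID"], row["Secret access key"])
    return creds

print(load_credentials(SAMPLE))
```

The resulting pair can then be fed to Set-AWSCredentials (or any S3 client) without manual transcription errors.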

RAID concepts and configuration in simple words for Windows Admin.


#raid, #sql, #storage, #windows