Last time around I posted on the issue of native VHD data integrity considerations. Native VHD is a very fast imaging solution to deploy. However, because keeping your whole hard drive in a single file increases your vulnerability to disk corruption (bad sectors in that one file can make data recovery much more difficult), I do not recommend Native VHD for computers on which users will store all their own data, unless that data is kept on a different disk partition.
There are a couple of scenarios for addressing this:
- Move the user’s profile storage onto a separate disk partition. Telling Windows to use a different drive for the Users folder should be possible with the MKLINK command, which creates what is known as a directory junction, and there is plenty of documentation on the web describing how. However, my primary concern is whether this scenario is officially supported by Microsoft: in some cases, moving directories that are expected to live on the C drive can break the installation of service packs or updates. So this might not be the best option unless you have tested it and are completely sure it works.
- Do a V2P instead, converting the VHD to a WIM file for final deployment. This is what I am currently trialling. We partition the disk just as we do for VHD boot, but instead of copying the VHD to a partition and setting it up there as the boot drive, the WIM file is applied to a partition with ImageX, and BCDBoot is then invoked to make the computer boot from that partition.
The scenario for a V2P is pretty easy to get started with. Simply mount a VHD to a physical computer. In our case I have a virtual server that has the image VHDs stored as files within its own virtual hard disk. Without shutting down that virtual server I can go into Disk Management and attach a VHD, which makes it accessible through a drive letter. I can then start up a deployment command prompt as an administrator, and run this command line:
imagex /capture source_path destination_image “image name” /compress none /verify
- source_path is the directory path to be imaged, e.g. C:
- destination_image is the path and filename of the WIM file to be created
- "image name" is a text string saved in the WIM file to describe it
- /compress is an optional switch specifying compression; turning it off speeds up the capture
- /verify enables integrity checking of the captured data

Note that the /capture switch requires all three of the first parameters above (source_path, destination_image and the image name); only the switches are optional.
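As a concrete illustration, a capture command might look like the following (the drive letter, output path and image name here are made-up examples, not the exact values from my setup):

```bat
imagex /capture C: D:\images\win7-master.wim "Windows 7 Master" /compress none /verify
```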
This got me a WIM file in about 38 minutes, which is quite reasonable for a volume of roughly the same number of gigabytes. The WIM file itself is only 21 GB, which is interesting considering compression was turned off. Windows automatically excludes a few paths during capture; the size difference also suggests the VHD file could be compacted, but I can’t be bothered doing this. WIMs also support single-instance storage, which is probably not operating in VHDs; this could further reduce the storage required.
I then booted the target into Windows PE and performed steps similar to VHD imaging: running a script through Diskpart to create the partitions the same way, then rebooting so the proper drive letters were assigned to the disks. Back in PE, I had to copy ImageX to my network deployment file share, as it is not included in a standard Windows PE boot image; it ships with the WAIK. Since the virtual server mentioned above is also a deployment server (it has the WAIK installed), I logged onto it and copied ImageX to the deployment share. I then ran ImageX to apply the image to the target with this command line:
imagex /apply image_file image_number dest_path
- image_file is the WIM file that contains the image
- image_number is the number of the image within the WIM file (since WIMs can store multiple images); in this case 1
- dest_path is where to apply the image; in this case E:
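For example (the share path here is a made-up placeholder; substitute your own deployment share and target drive letter):

```bat
imagex /apply \\deployserver\images\win7-master.wim 1 E:
```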
The apply process looked like it was only going to take about 15 minutes so I found something else to do while it was working. This is quite quick, as every other time I used ImageX it seemed to take a lot longer. Maybe I was trying to do backup captures with a lot more data. In this case, incidentally, the image is stored on a network share, so ImageX is doing pretty well considering it has to download it over the network connection, although to be fair the server is physically about 2 metres away from the target. There is a bit more distance in copper and 3 switches involved but all of those are in the same building and running at gigabit speed. As it happened the image was completely applied in 16 minutes which is a pretty good achievement.
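For reference, the Diskpart step mentioned earlier is driven by a plain text script passed to diskpart with /s. A minimal sketch might look like this (the partition sizes, labels and layout are assumptions; adjust them to your own scheme):

```bat
rem partitions.txt - run in Windows PE with: diskpart /s partitions.txt
select disk 0
clean
create partition primary size=100
format fs=ntfs label="System" quick
assign
active
create partition primary
format fs=ntfs label="Windows" quick
assign
exit
```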
Once ImageX has finished, the next step is to run BCDBoot to designate your deployment partition as the one that gets booted. This is needed because Windows 7 (and Vista) use a separate system partition to start the computer; the system partition then hands off to the operating system on its own partition (the one the image was applied to). The command here is pretty simple:
bcdboot windows_path /s system_path
where windows_path is the Windows directory in the OS partition and system_path is the drive letter of the system partition.
We already used this command for our native VHD deployments in a command script so it looks much the same.
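Putting it together for this deployment, with the OS image applied to E: and assuming the system partition appears as S: in Windows PE (both letters depend on how your own disk is partitioned and lettered), the call would look like:

```bat
bcdboot E:\Windows /s S:
```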
Once this is complete, try rebooting to see whether Windows starts up. I found that it booted as expected. If your VHDs were built on that same target platform, they will most likely already incorporate all the required drivers, so you are unlikely to run into driver installation problems. As expected, Windows set the partition to the C drive (this happens irrespective of the drive letters that appear in Windows PE; in this case it was E: there).
Therefore the likely scenario for our laptop deployment is to convert the final deployment image into a WIM and deploy it using a modified version of our Native VHD deployment procedure. We will therefore limit the use of Native VHD to two scenarios:
- Directly imaging platforms where user data is not saved to the boot drive (such as student computers or other networked desktops)
- Testing our laptop deployments only; the actual deployment will be physical.
I now have to decide whether to do a V2P on the laptops I have already sent out. Since we already partitioned those disks, the ImageX step would be relatively simple, though I would have to make a backup of the VHD first. Both steps are straightforward; the main issue is how big each VHD has become now that all of the users’ data has been copied into it.
V2P does add an extra step to VHD image development, so we hope Microsoft will develop a version of ImageX that works directly with VHD files; at the moment ImageX only works with WIM files. That would eliminate the VHD-to-WIM conversion, saving both the time needed to repeat the conversion each time the VHD changes and the extra space needed to store the WIM files.
The overall lesson is not to be too bleeding edge, and to read all the documentation. If you backed up your VHD regularly there wouldn’t be a problem, but people don’t do this. Microsoft really does present Native VHD as a test scenario; I haven’t seen them support it for production. It is mostly something that lets you service a VHD by booting it, in scenarios where you can’t use the special offline servicing tools, and it is only supported on the Enterprise and Ultimate editions of Windows 7. We will continue to use it as a useful image development system for both virtual and physical deployments, but the actual deployment will be physical where necessary.