60% of organisations report reduced costs after a well-planned platform move. That scale of impact matters—so we take a methodical approach that avoids surprises.
We help businesses plan and execute a clear migration strategy: scope, pilot, execution and validation. Our team uses native Proxmox tools first—GUI import wizard and CLI utilities—then falls back to manual methods only when needed.
Proxmox VE runs on Debian with a customised kernel and offers GUI, CLI and REST API control. We check version alignment, repository readiness and storage types before work starts—local, LVM, NFS or SAN.
Target VM configs follow best practice: VirtIO SCSI single, discard enabled, IO threads, VirtIO NICs, ballooning and the QEMU guest agent. We document VMID directories, disk and file locations, network interface mappings and rollback steps so every server and host is accounted for.
Key Takeaways
- We deliver a proven end-to-end plan for low-risk migration.
- Native Proxmox import tools minimise extra software and licences.
- We validate version, storage and network readiness before execution.
- Hardware profiles and VMID paths are defined for predictable outcomes.
- Clear timelines, rollback plans and stakeholder communication reduce downtime.
- For integrated infrastructure and support, see our hyper-converged infrastructure options.
Understanding the move: why Australian teams are shifting from VMware to Proxmox
Many IT groups in Australia are re‑evaluating their hypervisor footprint for cost and flexibility reasons. We see teams choosing an open, unified platform that reduces licence overhead and gives administrators greater control over configuration and hardware choices.
Proxmox VE is fully open‑source (AGPLv3) with optional subscriptions for enterprise repository access and support. Clustering uses Corosync for multi‑master resilience; networking uses vmbr interfaces (Linux bridges), with bonds providing link aggregation for higher throughput.
Storage matters: the system supports file‑level options (NFS, CIFS, directory with qcow2) and block back ends (ZFS, thin LVM, Ceph RBD). Storage choice affects performance, protection and how HA behaves.
“We prioritise predictable change windows and clear rollback steps so users know when services pause and what to expect after cutover.”
What to expect from the migration process and downtime planning
- Prepare configuration baselines and align storage and network designs.
- Select an import method—GUI import tools or manual conversion—and schedule low‑risk windows.
- Validate device and driver mapping (VirtIO NIC, VirtIO SCSI single) so the virtual machine boots cleanly.
For practical guidance and local support, see our Proxmox services.
Pre-migration checklist: network, storage and VM readiness
Before any transfer work begins, we run a focused readiness check that covers network, storage and VM configuration.
Backups, snapshots and data integrity: before you touch a disk
We confirm recent backups exist and have been verified. Data integrity takes priority over speed.
When snapshots are impractical, we use Proxmox Backup Server for protection and fast restores.
ESXi host access, SSH and connectivity to your Proxmox server
Enable SSH on the ESXi host (Host > Manage > Services > TSM‑SSH > Start) and verify reachability from your node.
We test latency, throughput and ACLs so SCP transfers of .vmdk and -flat.vmdk complete without interruption.
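As a quick sanity check, commands like these (run from the Proxmox node, with host names and paths as placeholders) confirm SSH access and that the source files are visible:

```bash
# Confirm SSH access to the ESXi host and list its datastores
ssh root@esxi-host 'ls /vmfs/volumes/'

# Spot-check the source VM directory so both .vmdk files are visible before any copy
ssh root@esxi-host 'ls -lh /vmfs/volumes/datastore/VMName/'
```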
Disable encryption, vTPM and ensure VMs are powered off
Disable disk encryption and remove vTPM on the source virtual machine before export. Always power the VM down cleanly.
Record NIC settings and plan temporary DHCP to avoid IP conflicts
- We document network settings — IP, gateway, DNS and VLAN — and plan temporary DHCP for first‑boot NIC changes.
- Identify datastore paths and the exact file names of every descriptor (.vmdk) and data (-flat.vmdk) file to avoid mismatches during import.
- List attached devices (ISOs, passthrough controllers) and detach anything unnecessary to simplify the target configuration.
- Follow a clear order of operations: backup, shutdown, SSH enablement, copy, import/attach, configuration, then first boot validation.
- Note: for Windows guests ensure VirtIO drivers are available during or after the move to reduce troubleshooting time.
“We sequence every step so teams know the order and can predict service windows — clear records support audits and repeatable success.”
Target VM best practice on Proxmox VE
A consistent target configuration improves uptime and simplifies lifecycle management for virtual machines.
CPU, memory and ballooning device recommendations
We pick CPU types that balance performance and mobility. Use the host type on identical nodes, or a generic x86-64-vX type on mixed hardware so VMs stay movable between hosts.
Enable the Ballooning Device: it provides memory telemetry and flexibility for the system without forcing dynamic resizing, which helps with capacity planning and monitoring.
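As an illustration, the equivalent CLI settings might look like this; the VMID and memory sizes are placeholders, not recommendations:

```bash
# Portable CPU type for mixed hardware; use --cpu host on uniform nodes
qm set 120 --cpu x86-64-v2-AES

# 8 GiB maximum memory with a 4 GiB ballooning floor for telemetry and headroom
qm set 120 --memory 8192 --balloon 4096
```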
Disks: VirtIO SCSI single, discard, IO threads and bus selection
Configure disks on a VirtIO SCSI single controller. Turn on discard for thin provisioning and enable IO threads to cut queue contention.
Prefer SCSI for performance and features. Use SATA or IDE only for compatibility when drivers block initial boot.
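A minimal sketch of the matching CLI settings, assuming VMID 120 and an already-imported volume on a storage named local-lvm:

```bash
# One VirtIO SCSI controller per disk
qm set 120 --scsihw virtio-scsi-single

# Attach the existing volume with discard and IO threads enabled
qm set 120 --scsi0 local-lvm:vm-120-disk-0,discard=on,iothread=1
```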
Networking: VirtIO NICs, bridges and VLANs for Australian campus/branch setups
Choose VirtIO NICs for low overhead and map them to vmbr bridges tied to production VLANs. This suits campus and branch topologies and site interconnects.
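For example, a guest NIC mapped to a VLAN-tagged bridge might be set like this (bridge name and VLAN ID are assumptions to adapt):

```bash
# VirtIO NIC on bridge vmbr0, tagged for VLAN 20
qm set 120 --net0 virtio,bridge=vmbr0,tag=20
```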
QEMU guest agent and VirtIO drivers for Windows and Linux
Install the QEMU guest agent for richer telemetry and lifecycle control. Ensure VirtIO drivers are present — most modern Linux distros include them; Windows requires driver installation during or after first boot.
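Enabling the agent is a two-part step: switch it on in the VM options, then install the agent inside the guest. A sketch for a Debian or Ubuntu guest follows; package names vary by distribution:

```bash
# On the Proxmox node: enable the guest agent device for the VM
qm set 120 --agent enabled=1

# Inside a Debian/Ubuntu guest: install and start the agent
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent
```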
| Option | Recommended Setting | Why | Notes |
|---|---|---|---|
| CPU | x86-64-vX / host | Balance performance and live-migration capability | Choose host on uniform hardware |
| Disk | VirtIO SCSI single, discard, IO threads | Better throughput and thin-provision support | Confirm image file format on destination storage |
| Network | VirtIO NIC → vmbr → VLAN | Lowest overhead and clear mapping | Match VLANs to campus and DC networks |
| Firmware | SeaBIOS or OVMF | Avoid boot mismatches | Align with source BIOS for smooth boot |
We document hardware profiles for database, file and application tiers. This keeps machines consistent and auditable and ties storage class to workload I/O patterns.
Migrate VMware to Proxmox using the integrated Import Wizard
The integrated Import Wizard simplifies the transfer of VMs by walking you through a web-based workflow. It checks prerequisites and proposes a target configuration, so fewer surprises appear at first boot.
Requirements and repository setup
Ensure Proxmox VE is version 8.2 or later and enable the appropriate repositories (enterprise or no-subscription) under Updates > Repositories. Install the pve-esxi-import-tools package, then update and reboot the node so the web import interface appears.
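On a node with a suitable repository enabled, the package steps are typically as simple as this (shown for a standard apt-based setup):

```bash
# Refresh package lists and install the ESXi import tooling
apt update
apt install pve-esxi-import-tools

# Bring the node fully up to date so the import interface is available
apt full-upgrade
```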
Adding ESXi storage and running the import
In Datacenter > Storage > Add > ESXi, supply the ESXi host IP, username and password. The wizard lists available VMs and files. Select the source .vmx, pick disks and the target storage, and map networks to vmbr bridges.
Options, advanced controls and Live-Import
Advanced options let you move individual disks, change NIC models and attach ISO media. Live-Import starts the VM once enough data has copied and pulls the rest in the background; it is not a live migration. We recommend an offline import for data consistency unless change rates and user tolerance permit otherwise.
| Step | Action | Why |
|---|---|---|
| Prereq | Update repos & install import tools | Expose the web-based import interface |
| Storage | Add ESXi storage entry with root user | Enumerate VMs and datastores for selection |
| Import | Select .vmx, map disks and networks | Ensure each disk lands on intended storage tier |
| Validation | Review proposed configuration and logs | Catch disk order, firmware and NIC mismatches |
“We validate the wizard’s suggestions and keep a clear audit trail for each import.”
Manual migration path: from VMDK on ESXi to a working VM on Proxmox
For complex servers, a stepwise manual approach ensures every VMDK descriptor and flat data file moves intact. We follow a checklist so the process is repeatable and auditable.
Enable SSH and copy source files
Stop the source VM and enable SSH (TSM‑SSH) on the ESXi host. Locate the datastore path — for example /vmfs/volumes/datastore/VMName/ — and confirm both the .vmdk and -flat.vmdk are present.
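A quick listing on the ESXi host confirms both files are present before any copy starts; the path and VM name below are examples:

```bash
# On the ESXi host: verify the descriptor (.vmdk) and data (-flat.vmdk) files
ls -lh /vmfs/volumes/datastore/VMName/
# Expect VMName.vmdk (small descriptor) and VMName-flat.vmdk (full disk size)
```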
Transfer and convert the disk
Copy both files to the Proxmox datastore directory via scp or WinSCP. Choose conversion:
- qemu-img convert -O qcow2 for thin, space‑efficient virtual disk files.
- qm importdisk <VMID> source.vmdk <storage> to import directly into Proxmox (raw or qcow2 with -format).
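A worked sketch of the copy-and-convert step, assuming a destination VMID of 120, a staging directory under /var/lib/vz and a block storage named local-lvm:

```bash
# Copy both the descriptor and the flat file from the ESXi host
scp root@esxi-host:/vmfs/volumes/datastore/VMName/VMName.vmdk \
    root@esxi-host:/vmfs/volumes/datastore/VMName/VMName-flat.vmdk \
    /var/lib/vz/images/

# Option A: convert to qcow2 for a file-level store
qemu-img convert -p -O qcow2 /var/lib/vz/images/VMName.vmdk /var/lib/vz/images/VMName.qcow2

# Option B: import directly into a Proxmox storage
# (assumes the destination VM shell with VMID 120 already exists; see the next section)
qm importdisk 120 /var/lib/vz/images/VMName.vmdk local-lvm
```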
Attach, set boot order and validate
Create the destination VM shell (reserve the VMID), attach the imported drive to a VirtIO SCSI single controller when drivers exist, or use SATA for first‑boot compatibility.
- Set BIOS or OVMF to match the source — avoid TPM during this move.
- Configure boot order so the imported drive is first.
- Record checksums and command transcripts, then power on and observe first boot.
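A hedged end-to-end sketch of this sequence; the VMID, VM name and storage are placeholders:

```bash
# Create the destination VM shell and reserve the VMID
qm create 120 --name vmname --memory 8192 --cores 4 --net0 virtio,bridge=vmbr0

# Attach the disk imported earlier; use --sata0 instead if VirtIO drivers are missing
qm set 120 --scsihw virtio-scsi-single
qm set 120 --scsi0 local-lvm:vm-120-disk-0,discard=on,iothread=1

# Match the source firmware and boot from the imported disk
qm set 120 --bios seabios        # or --bios ovmf for UEFI sources
qm set 120 --boot order=scsi0

qm start 120
```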
“We keep changes small and predictable — this reduces surprises and speeds troubleshooting.”
Storage strategy on Proxmox VE: local, shared and performance trade-offs
Deciding the right storage architecture is the single biggest lever for predictable performance and cost.
File versus block storage
We split choices into file‑level and block‑level designs. File back ends (Directory, NFS, CIFS) store each virtual disk as a qcow2 file, which provides snapshot and space‑saving features on file stores.
Block back ends (ZFS, Ceph RBD, thin LVM) present raw devices and rely on the back end for snapshots. Use raw for top throughput where the back end offers its own snapshot mechanics.
| Type | Good for | Notes |
|---|---|---|
| File (qcow2) | Flexible, easy backups | Snapshots at file level; not for containers |
| Block (ZFS/Ceph) | HA, performance | Backend snapshots, raw preferred |
| LVM on SAN | Traditional SAN/NAS | Consider LVM volume‑chain in VE 9.0 |
Snapshots and backup alternatives
Proxmox Backup Server offers deduplicated backups and live‑restore when snapshots are limited. We recommend it where qcow2 snapshots or container snapshots are unsuitable.
SAN/NAS and multipath
For iSCSI/FC fabrics we plan multipath with redundant links, validate failover and tune queue depths. We document devices and map them to hosts so future growth and moves follow clear methods.
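On an iSCSI or FC-attached node, standard Linux tools confirm path health before workloads move; these read-only commands (from the multipath-tools and open-iscsi packages) are illustrative:

```bash
# Show multipath topology, path state and active/enabled path groups
multipath -ll

# List active iSCSI sessions and the interfaces they use
iscsiadm -m session -P 1
```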
“Design storage classes by workload — balance cost, skills and SLAs for Australian data centres.”
Networking on Proxmox: bridges, bonds and VLANs after migration
Network design is the invisible foundation that determines how reliably your virtual machines talk to users and storage.
vmbr interfaces act as virtual switches on Proxmox — they map directly to physical ports or bonds. We map ESXi port groups to vmbr bridges and apply VLAN tags so each workload lands on its production, management or backup segment.
vmbr interfaces, bonds (LAGs) and SDN basics
We design bonds for throughput and resilience, then anchor bridges to those bonds. Linux bridge parity covers most needs — Open vSwitch is rarely required.
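A minimal /etc/network/interfaces sketch for an LACP bond carrying a VLAN-aware bridge; interface names, addresses and VLAN ranges are assumptions to adapt:

```
# /etc/network/interfaces (fragment) on a Proxmox node
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 10.0.10.11/24
    gateway 10.0.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```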
Choosing NIC models and mapping ESXi networks
We prefer VirtIO NICs for low overhead and performance. For legacy machines we pick compatible models so first‑boot issues are minimised.
- Adopt per-guest VLAN tagging or bridge-port sub-interfaces to reduce mis‑tagging.
- Isolate Corosync from backup and storage flows to protect HA behaviour.
- Validate MTU, LACP hashing and STP across the stack before cutover (see the checks sketched after this list).
- Document interface names, bridge roles and change windows for gateway or firewall updates.
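The pre-cutover checks referenced above can be run with a few read-only commands on each node:

```bash
# LACP partner and member state for the bond
cat /proc/net/bonding/bond0

# MTU, bridge and VLAN-filtering details for the production bridge
ip -d link show vmbr0
```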
“We monitor early — flow and error counters catch cabling or template drift before users notice.”
Post-migration validation and optimisation
A focused post‑cutover review makes sure each virtual machine operates as expected. We run quick checks that confirm firmware mode, boot order and device mapping.
First boot checks: boot order, device drivers, guest tools
On first boot we verify the boot order and that the OS sees the expected controllers and virtual disk volumes.
- Checklist: firmware mode, boot order, attached devices and visible disks.
- Confirm Windows and Linux detect storage controllers and network interfaces.
- Validate application data and basic I/O under light load.
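A few read-only commands cover most of this checklist; VMID 120 is a placeholder, and the last two run inside a Linux guest:

```bash
# On the Proxmox node: confirm firmware, controllers, disks and boot order
qm config 120

# Verify the QEMU guest agent responds once it is installed
qm agent 120 ping

# Inside a Linux guest: disks visible to the OS, then interface addressing
lsblk
ip -br addr
```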
Install VirtIO and QEMU agent, then switch controllers
Install VirtIO drivers from the virtio-win ISO for Windows and enable the QEMU guest agent for lifecycle integration.
Once drivers are active we switch disk controllers to VirtIO SCSI single and NICs to VirtIO to improve throughput and reduce CPU overhead.
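For a Windows guest the sequence might look like this, with the VM powered off for the controller change; the ISO path, storage and VMID are assumptions:

```bash
# Present the VirtIO driver ISO to the Windows guest
qm set 120 --ide2 local:iso/virtio-win.iso,media=cdrom

# After drivers are installed and verified inside Windows, switch to VirtIO devices.
# If the disk was attached as sata0 for first boot, reattach it as scsi0:
qm set 120 --scsihw virtio-scsi-single
qm set 120 --delete sata0
qm set 120 --scsi0 local-lvm:vm-120-disk-0,discard=on,iothread=1
qm set 120 --boot order=scsi0
qm set 120 --net0 virtio,bridge=vmbr0
```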
Cleanup: remove legacy files, confirm backups and consider HA
After validation we remove old .vmdk and -flat.vmdk files to free storage and avoid confusion. Update backup policies in Proxmox Backup Server and test live-restore workflows.
- Check virtual disk trim/discard and queue settings on thin storage (a quick check follows this list).
- Plan HA only when shared storage and a dedicated Corosync network are in place.
- Record the final hardware and configuration baseline for each VM.
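One quick way to confirm discard works end to end on thin-provisioned storage is to run a trim from inside a Linux guest:

```bash
# Trim all mounted filesystems and report how much space was released
sudo fstrim -av
```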
“Note: keep logs, rescans and escalation paths documented so support teams can act quickly.”
Conclusion
Final validation and clear runbooks turn a successful transfer into sustained efficiency.
We close with a clear summary: planned migration steps, the right methods and disciplined execution deliver a predictable VMware-to-Proxmox outcome for your business.
Start with a pilot and confirm operating system behaviour, performance and management fit before scaling across more machines. Adopt the new Proxmox standards — VirtIO controllers, QEMU guest agent and defined storage classes — for better throughput and simpler operations.
We verify server and host policies, backups and monitoring, and document how storage tiers map to workloads and disk growth. Both manual and wizard-driven approaches are valid; choose by scale, timing and governance.
We supply a concise runbook, administrator training and cleanup guidance for files and images. When application teams sign off, we capture tuning in baselines and stand ready to help with HA, automation or cloud integration.
FAQ
What are the main benefits for Australian teams switching from VMware ESXi to Proxmox VE?
We gain lower licensing costs, greater control and open-source flexibility. Proxmox VE combines KVM and LXC, integrates backup and clustering, and supports common storage back ends like ZFS and Ceph — making it a strong option for campus and branch deployments across Australia.
What downtime should we plan for during the migration process?
Downtime depends on your chosen method. Using the Import Wizard or offline file copy requires VM shutdown and can take minutes to hours per machine depending on disk size and network speed. Live-import tools reduce interruption but still carry risk — test on a sample VM first and schedule maintenance windows.
What must we do before touching any disks or starting migration?
Take full backups and snapshots, verify backup integrity, and document current disk layouts. Ensure VMs are powered off for offline methods, disable vTPM and VM encryption, and confirm ESXi host SSH and datastore access. Record NIC settings and prepare temporary DHCP or static IP plans to avoid conflicts.
Which Proxmox target settings deliver best performance for VMs?
Use VirtIO drivers for NICs and disk I/O, enable the VirtIO SCSI single controller with IO threads for high throughput, and configure memory ballooning sensibly. For Windows guests, install VirtIO drivers and the QEMU guest agent during or after first boot to improve performance and management.
What prerequisites are needed to use the Proxmox integrated Import Wizard?
Use Proxmox VE 8.2 or later and install pve-esxi-import-tools from the appropriate repository. Ensure the Proxmox host can access ESXi datastores via network, and that you have ESXi credentials with datastore read access. Validate VMX and VMDK files are intact before starting.
How do we add ESXi storage to Proxmox for import?
In the Proxmox GUI go to Datacenter → Storage → Add → ESXi, then provide ESXi host details and credentials. This exposes the datastore so the import wizard or manual copy can access VMDK files. Confirm network reachability and firewall rules first.
What should we consider when choosing live-import versus offline import?
Live-import lowers downtime but can miss in-memory state and may need additional post-migration checks. Offline import (VM shutdown) is simpler and safer for data integrity. Choose live only for stateless or high-availability setups — otherwise prefer offline and schedule maintenance windows.
For manual migration, how do we copy VMDK files from ESXi to Proxmox?
Enable SSH on the ESXi host, locate the VM datastore paths, and use SCP/WinSCP or rsync to transfer .vmdk and -flat.vmdk files to the Proxmox host. Transfer over a secure and fast network — consider brief maintenance windows to avoid disk churn during copy.
Once VMDKs are on Proxmox, how do we convert and attach them?
Use qm importdisk or qemu-img to convert VMDK to raw or qcow2 depending on your storage choice. Then attach the converted disk to the VM, set the proper controller (VirtIO SCSI recommended), and update the boot order and BIOS/UEFI settings to match the source machine.
Which storage types should we consider for performance and resilience?
Choose between file and block: directory or NFS for simplicity; ZFS for local redundancy and snapshots; Ceph RBD for scalable distributed storage; LVM for block-based setups. Match storage to workload — high IOPS workloads benefit from fast disks and tuned ZFS or Ceph configurations.
How do snapshots and backups differ on Proxmox compared with ESXi?
Proxmox supports snapshots at the storage level for ZFS and some back ends, but for reliable long-term protection use Proxmox Backup Server. PBS provides efficient deduplication and catalogued restores — it’s the recommended alternative to relying on VM snapshots alone.
What networking changes are required after import?
Map ESXi port groups and VLANs to Proxmox bridges (vmbr). Consider bonds for redundancy and LACP where supported. Replace ESXi NIC models with VirtIO in guest settings, and verify VLAN tags and firewall rules to maintain connectivity for campus and branch networks.
What checks should we run on first boot of the migrated VM?
Confirm boot order and that the correct disk and controller are selected. Install VirtIO drivers and the QEMU guest agent, verify IP addressing and DNS, and test application functionality. Remove any legacy VMDK files only after backups are confirmed.
Are there SAN, iSCSI or multipath considerations for Australian data centres?
Yes — ensure proper multipath configuration for iSCSI/FC, align LUN presentation and device naming, and test failover scenarios. Work with your data centre or SAN vendor to match best practice for latency and redundancy across AU sites.
How do we handle BIOS versus UEFI and boot modes during migration?
Match the guest’s original firmware type. Convert disk partitioning if required (MBR ↔ GPT) and set the Proxmox VM to BIOS or UEFI accordingly. Incorrect firmware choice can prevent boot — test and adjust boot order and firmware settings as needed.
What are common post-migration cleanup tasks?
Remove leftover VMDK files from ESXi after decommission, update monitoring and backup configurations, enable HA if needed, and document the new topology. Validate backups with test restores and consider performance tuning based on observed metrics.
Which tools and versions are recommended for a smooth transition?
Use recent Proxmox VE 8+ releases, pve-esxi-import-tools, QEMU utilities and Proxmox Backup Server. Keep guest VirtIO drivers current for Windows and install the QEMU guest agent on Linux. Test tools in a lab before production migration.

