Tech Corner

Most of the time I use a new drive and keep my working stuff intact at any price.

In this case you're better off installing the OS on the onboard controller even if it's slower... It will all work a lot faster if you use the fast drive for working files, temp and the paging file...

And you let the fast drive take the wear, saving the OS drive from a lot of temp writes.

I used SSDs this way alongside HDDs and it's really worth it.
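If you want to check that the split is actually doing its job, a quick sketch like this (Windows, Python standard library only; the registry value is the stock one, the drive letters are just examples) shows where TEMP and the page file currently point and how much room each drive has:

```python
# Minimal sketch: report where TEMP and the page file live and how much
# free space each drive has, so you can confirm they sit on the fast drive.
import os
import shutil
import winreg

# Where the temp folders point right now.
print("TEMP =", os.environ.get("TEMP"))
print("TMP  =", os.environ.get("TMP"))

# Page file locations are listed under this standard registry value.
key_path = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")
    print("PagingFiles =", paging_files)

# Free space per drive letter (adjust the letters to your machine).
for letter in "CDEF":
    root = f"{letter}:\\"
    if os.path.exists(root):
        free_gib = shutil.disk_usage(root).free / 2**30
        print(f"{root} free: {free_gib:.1f} GiB")
```

Moving TEMP/TMP and the page file themselves is then just the usual Environment Variables and Virtual Memory dialogs.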
 
And in regard to security, if you think UEFI is better than BIOS you have a real problem...

There is no interest in hacking a basic I/O system... What about something that can run code instead...

I honestly tried to get board BIOS "rootkits" to take hold and they failed to infect... And when one did succeed, it was cleared with a jumper move.

UEFI is another story. The things I find make me think a BIOS boot device is the most secure way of booting an x86/64 OS... lolll
 
Another Server Update...

I'm a genius... I was able to put the UEFI boot partitions of all the VMs on the same virtual mirror drive, so there is only one FAT partition in the whole system.

The TrueNAS installation was a lot easier than Proxmox and the PowerEdge BIOS. I already have the arrays mapped with direct IOMMU and paravirtualized core drivers...

All settings are available for the same direct PCIe lane access with up to 3 x16 GPUs, and I still have 4 x8 slots free loll.

All the basics are there and the CPUs never reached 4% at peak, lollll... I'm left with 384 GB of RAM, 36 CPUs and 1 TB for the AI boot drive. 4 TB of RAID 0 for thinking space and whatever GPU I add...

This is where it gets tough. The interesting models require genuinely beastly cards. These server cases are very poorly laid out for accommodating a 48 GB video card, so at the moment the best-fitting card is quite a bit under spec.

I talked to my NVIDIA connections and I'm going to be pointed toward the high-density computing GPU division.

I can't wait to see the price lolll.
 
Dug my old NUC out of the spares box yesterday and fired it up. Fairly modest Gen 4 i3 with 12GB ram and a 512G SATA SSD.
Running Win10 Pro just fine but I don't use it. So I thought I'd transfer the OEM licence to a VM on my laptop and upgrade to W11, cos ideally I need 2 PCs when I am away from home for work.

Laptop has Win11 Home so I installed VirtualBox and did a clean install of Win11 Pro. Spec is OK but nothing special. Gen 8 i7, 16GB ram and 2TB nVME, all in a Dell XPS 13. Only it runs like a pig!
Virtualisation turned on in BIOS
Windows Virtualisation turned off (it was much worse with it turned on)
VM has 8GB ram and 4 processors (VB initially suggested 1, I tried 2 but processor was permanently at 100%)
Fixed Disk (VHD) 512GB on the main nVME
Latest version of extension pack installed
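For reference, the same sizing can be done from the command line instead of the GUI. A rough sketch (the VM name and paths are placeholders, and the flags are the documented VBoxManage ones, so check them against your version):

```python
# Sketch: apply the VM sizing above via VBoxManage rather than the GUI.
# "Win11Pro" and the paths are placeholders.
import subprocess

VM = "Win11Pro"

def vbox(*args):
    """Run one VBoxManage command, echoing it first."""
    cmd = ["VBoxManage", *args]
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 8 GB of RAM and 4 vCPUs, as in the list above.
vbox("modifyvm", VM, "--memory", "8192", "--cpus", "4")

# A fixed-size (pre-allocated) 512 GB VHD on the main nVME.
vbox("createmedium", "disk",
     "--filename", r"C:\VMs\Win11Pro.vhd",
     "--format", "VHD",
     "--variant", "Fixed",
     "--size", str(512 * 1024))  # --size is in MB
```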

The VM is pretty much unusable even in offline mode. I know wifi adaptors don't play nicely with virtual adaptors (I'm in bridged mode) but that doesn't seem to be the issue.
Right now my Azure Virtual Desktop seems lightning fast by comparison on the same machine, and I'm getting much better performance remoting in to a VM running on my desktop. I have used VB on this machine a few years ago and had no issues with Win 10 or Linux guests.
Performance requirements are quite low, I will only use it for supporting SWMBO's office system so I pretty much only need browsing, but do need to connect to their AzureAD. So far I have been managing using a VPN connection to home and remoting in to a VM running on my desktop.

Thankfully I haven't transferred the licence yet! Any obvious suggestions for getting acceptable performance before I blow it away and return to remoting?
 
Couple of tweaks and it now seems usable. Down to 25 seconds from power on to login screen.
This takes <5 seconds on the desktop with Hyper-V :whistle:
Network download speed around 50% of the host and 33% for uploads. Only about 10% effect on the host when the VM is running.

I'll leave Intune to do its thing and I have a month to decide if it's worth activating Windows. I expect I'll see a small additional gain once it has set itself up and I can start it headless, using RDP instead of the VBox GUI.
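For the headless + RDP part, something along these lines should do it. Sketch only: the VM name and port are placeholders, VRDE comes with the extension pack that's already installed, and mstsc is just the stock Windows RDP client:

```python
# Sketch: boot the VM without a GUI window and reach it over RDP.
import subprocess

VM = "Win11Pro"  # placeholder name

# Expose the VM console over VRDE (RDP-compatible) on a non-default port.
subprocess.run(["VBoxManage", "modifyvm", VM,
                "--vrde", "on", "--vrdeport", "5001"], check=True)

# Start without opening a window on the host.
subprocess.run(["VBoxManage", "startvm", VM, "--type", "headless"], check=True)

# Connect with the normal Windows RDP client (from the host or the LAN).
subprocess.run(["mstsc", "/v:localhost:5001"])
```

Once the guest's own Remote Desktop is enabled you can of course RDP straight to its bridged IP instead and skip VRDE altogether.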
 
Might help others if you describe the tweaks.
 
Most significant were
  • Changed the chipset for the VM's motherboard to ICH9
  • Disabled 3D acceleration
  • Power setting on host and guest set to max performance
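For anyone wanting to script those, the rough command-line equivalent (placeholder VM name; the GUID is the built-in Windows "High performance" power plan):

```python
# Sketch: the same three tweaks from the command line.
import subprocess

VM = "Win11Pro"  # placeholder name

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

# ICH9 chipset and 3D acceleration off (VM must be powered off first).
vbox("modifyvm", VM, "--chipset", "ich9")
vbox("modifyvm", VM, "--accelerate3d", "off")

# Host side: switch to the built-in High performance power plan.
HIGH_PERF = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"
subprocess.run(["powercfg", "/setactive", HIGH_PERF], check=True)
# Do the same inside the guest once it is running.
```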
 
Any obvious suggestions for getting acceptable performance before I blow it away and return to remoting?

Get ready for mind blowing performance :cool:.

The latest versions of VBox support virtualized I/O. It needs Intel VT-x and VT-d; if your motherboard supports them, your CPU does too.

Create a new empty Machine for Windows... Get into the settings...

1: System: Chipset: ICH9... Enable I/O APIC... Paravirtualization interface: KVM
2: Storage: Detach the VHD from your SATA controller.
3: Add a second virtual CD drive to the SATA controller.
4: Add a VirtIO-SCSI controller... Attach your VHD back under the VirtIO-SCSI controller and tick Solid-state Drive in the VHD options.
5: Load your Windows ISO in the first virtual CD drive.
6: Download the VirtIO drivers ISO from https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/?C=M;O=D and insert it in your second virtual CD drive.
7: Disable the audio card.
8: Network adapter: Bridged, select the adapter connected to your LAN... Adapter type: Paravirtualized Network (virtio-net)
9: Adjust the boot order to boot from the Windows installer. Nothing to do if UEFI.
10: USB: Select the USB 2.0 controller.
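If you'd rather script it than click through the GUI, here is roughly the same setup driven through VBoxManage. Treat it as a sketch: the VM name, file paths and host adapter name are placeholders, and the flag spellings are the documented VirtualBox 7.x ones, so check them against your install before relying on it:

```python
# Sketch: the settings list above, applied with VBoxManage from Python.
import subprocess

VM = "Win11Pro"                        # placeholder VM name
VHD = r"C:\VMs\Win11Pro.vhd"           # existing Windows disk image
WIN_ISO = r"C:\ISO\Win11.iso"          # Windows installer ISO
VIRTIO_ISO = r"C:\ISO\virtio-win.iso"  # the ISO from the fedorapeople link above

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

# 1: ICH9 chipset, I/O APIC, KVM paravirtualization interface.
vbox("modifyvm", VM, "--chipset", "ich9", "--ioapic", "on",
     "--paravirtprovider", "kvm")

# 2-6: SATA controller carrying only the two installer CDs, and a
#      VirtIO-SCSI controller carrying the VHD flagged as an SSD.
vbox("storagectl", VM, "--name", "SATA", "--add", "sata")
vbox("storageattach", VM, "--storagectl", "SATA", "--port", "0",
     "--device", "0", "--type", "dvddrive", "--medium", WIN_ISO)
vbox("storageattach", VM, "--storagectl", "SATA", "--port", "1",
     "--device", "0", "--type", "dvddrive", "--medium", VIRTIO_ISO)
vbox("storagectl", VM, "--name", "VirtIO", "--add", "virtio")
vbox("storageattach", VM, "--storagectl", "VirtIO", "--port", "0",
     "--device", "0", "--type", "hdd", "--medium", VHD,
     "--nonrotational", "on")

# 7: no audio card (7.x spelling; older releases used "--audio none").
vbox("modifyvm", VM, "--audio-enabled", "off")

# 8: bridged networking with the virtio-net device model.
vbox("modifyvm", VM, "--nic1", "bridged",
     "--bridgeadapter1", "Ethernet",   # use your real host adapter name
     "--nictype1", "virtio")

# 9: with EFI firmware the installer DVD is picked up automatically.
# 10: USB 2.0 controller.
vbox("modifyvm", VM, "--usbehci", "on")
```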

When the Windows installer reaches the drive selection for installation, click Load drivers... Go to the VirtIO CD, browse to the VioSCSI folder and load the disk drivers for your version of Windows.

Once Windows completes the installation, configure the VirtIO network and ballooning drivers... There is also a guest agent you can install.

If you manage to get it up that way, you will have paravirtualized direct I/O for both your disk drive and network adapter in the VM.

I don't know how much faster it will be for you... But it's a lot better than layered virtualization on my computer.

Edit: Typos and small corrections...
 
I'll give it a go later. As I said, performance is now acceptable but not great. Starting in headless mode didn't give a performance boost; according to the docs it doesn't do anything beyond "turning off the monitor", i.e. no window (or is that just a hidden window?). I had the sound card enabled and was kind of surprised that I still got the logon sound after booting headlessly. The interface is smoother using mstsc and it's more like Windows - so I'll stick with that.
On the plus side my Intune deployment worked flawlessly once I attached to AzureAD so I had a fully configured, installed and debloated system without having to get my hands dirty.
 
When the Windows installer reaches the drive selection for installation, click Load drivers... Go to the VirtIO CD, browse to the Viostor folder and load the disk drivers for your version of Windows.
I'm failing to load the drivers here. The Windows installer shows them as incompatible (I have to untick the "Hide incompatible drivers" checkbox to see them) - and then I see the driver twice! Tried using the latest ISO as well as the previous version :sad:
Edit: If I switch to the previous version of the Windows installer it will (apparently) install the SCSI passthrough driver. After this it can see my VHD but still says it can't install to it because there are no drivers for it :mad:
 
What version of Windows are you installing? I just tried W11 23H2 on VBox 7.1.12 and the files are copying atm

But 24H2 indeed did not find the drivers :mad:
 
Just to make sure, I corrected that in the post: the directory is VioSCSI, not VioStor... My error... VioStor is for the plain virtio block devices.
 
It's 24H2.
I tried installing the latest version directly into a running instance. It installed OK and Windows recognises the controller and reports it's working correctly, but when I moved the disk it failed to boot.
Running system restore now and I'll try again with 0.1.229, which according to ChatGPT is known to work ...
watch this space
 
And we have liftoff. Since the virtio tools did not create a restore point or a working uninstaller (naughty naughty), I'm now waiting on Intune to finish doing its stuff and then I'll have a look at performance, which will be just perception as I never ran any benchmarks.
 
T'was worth a try but I have reverted.
Could no longer boot in under a minute :mad:
It seems 24H2 is the most likely factor but that's what my users are on. Issues were first reported over a year ago and are common to Windows / Linux hosts and VBox / QEMU, with no apparent solution. Putting the disk back on the SATA controller restored boot time to <20 seconds, although Winsat only showed about a 25% improvement in read / write speed and no difference in latency.
Might revisit one day if I'm bored but for now not worth spending any more time.
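For the record, the "put it back" step is just a detach and reattach. Roughly (same placeholder names as in the earlier sketch, VM powered off):

```python
# Sketch: move the VHD off the VirtIO controller and back onto SATA.
import subprocess

VM = "Win11Pro"               # placeholder name
VHD = r"C:\VMs\Win11Pro.vhd"  # placeholder path

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

# "none" as the medium removes the attachment from the VirtIO controller.
vbox("storageattach", VM, "--storagectl", "VirtIO", "--port", "0",
     "--device", "0", "--medium", "none")

# Hang the disk off the SATA controller again.
vbox("storageattach", VM, "--storagectl", "SATA", "--port", "2",
     "--device", "0", "--type", "hdd", "--medium", VHD,
     "--nonrotational", "on")
```

Inside the guest, an elevated winsat disk -drive c run before and after gives the read / write and latency numbers to compare.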
 
