Journey to Deep Learning: Cuda GPU passthrough to a LXC container

So in the past few months I got serious with machine learning and bought a GTX 1070 for a very nice price ten days ago (Thu, Jan 12):
– Full name: Inno3D iChill X4 (HerculeZ Airboss) GTX 1070, for the bargain price of 350€ taxes included (full price was 470€)

The same day, as it happens, the impressive Data Science Bowl 2017 was announced on Kaggle. It’s a challenge to detect lung cancer from scans of patients, with $1M in prizes. Just wow!

I happily installed the card in my headless server, which is powered by Proxmox.
It was detected right away, and here is my 8-hour journey to pass the proverbial GPU bucket through to my Archlinux Machine Learning container.

Now first of all, you are in luck: due to the sheer size of the Data Science Bowl 2017 data (a 70GB 7z file, 140GB uncompressed), my 256GB SSD was a bit tight, so I will document a full (re-)installation of Proxmox.

My initial setup had:
– Proxmox rootfs 50GB
– Proxmox “local” lvm (to store iso, backups, container template) 50GB
– LVM thin provisioning (everything else, especially my Machine Learning container)

NAS disk entirely passed through to a NAS virtual machine
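For reference, a layout along these lines could be created with LVM commands roughly like the following. This is only a sketch: the volume group name `pve` matches Proxmox defaults, but the exact sizes and names are from my setup, and the Proxmox installer normally does this for you.

```shell
# Sketch of the initial layout, assuming a volume group named "pve"
# (the Proxmox default). Sizes match the setup described above.

# Proxmox root filesystem (50GB) -- mine was formatted as XFS, see below
lvcreate -L 50G -n root pve

# Proxmox "local" storage for ISOs, backups, container templates (50GB)
lvcreate -L 50G -n data pve

# Thin pool taking all remaining space, for containers and VMs
lvcreate -l 100%FREE --thinpool thinpool pve
```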

Why did I have to reinstall Proxmox? Couldn’t I just shrink my rootfs and local storage partitions?
Good question. I had actually created my Proxmox root partition with the XFS filesystem, which cannot be shrunk.
I did that for performance reasons after a lengthy review of the LKML mailing list and various threads on an ext4 performance bug, but I guess I could have gotten the same performance by removing barriers from ext4.
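For the record, disabling write barriers on ext4 is just a mount option. The device UUID below is a placeholder, and note the trade-off: barriers exist to protect metadata on power loss, so turning them off exchanges crash safety for throughput.

```shell
# /etc/fstab entry (illustrative UUID) mounting ext4 without barriers.
# WARNING: barrier=0 risks filesystem corruption on sudden power loss.
# UUID=xxxx-xxxx  /  ext4  defaults,barrier=0  0  1

# Or remount a live ext4 filesystem without barriers:
mount -o remount,barrier=0 /
```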

Oh well …

Pages: 1 2 3 4 5 6 7 8
