Journey to Deep Learning: Cuda GPU passthrough to a LXC container

Disclaimer: I am not responsible for any data loss; back up your data.

Next, you choose "Install Proxmox VE" and hit your first IPMI bug:

The “Accept” button is not there.
To compare, I fired up my trusty VMware Fusion:

It's an "I agree" button! And the shortcut is "Alt + G".

Next is choosing your installation target disk; be careful not to pick the wrong one.

Filesystem-wise you have the default ext4, plus ext3, XFS, and ZFS in RAID0, RAID1, RAID10, RAIDZ-1 (equivalent to RAID5), RAIDZ-2 (equivalent to RAID6), or RAIDZ-3.

– Stay with the default ext4: it's the most widely used filesystem, so support keeps improving. If you're curious like me, search for something like "ext4 performance slow degradation" to make sure the performance/risk trade-off suits you.
– ZFS is very nice for a file server, except that there is no GUI for it in Proxmox.
– Ext3 is tried and true, but it doesn't support SSD wear leveling, and it's the only choice that doesn't support extents for large files.
– XFS is great, except that if you later want to shrink your filesystem you'll be out of luck, like me.
– Do your homework on RAID and RAIDZ.
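If you want to double-check what a given install ended up with, the filesystem type and ext4 feature flags are easy to inspect from the shell. A quick sketch (the LVM path is the Proxmox default naming; adjust to your setup):

```shell
# Show which filesystem the root volume uses (e.g. ext4)
df -T /

# For ext4, confirm extents are enabled for large files;
# the "Filesystem features" line should include "extent".
# Path is the default Proxmox root LV, adjust if yours differs:
# tune2fs -l /dev/mapper/pve-root | grep -i features
```
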

Now regarding the rest, you probably want to maximize the space for your Machine Learning VM to hold the GBs of data.
Lesson learned: 256GB is not enough for Machine Learning.
I do have lots of HDDs I could use, but they would be slow to load/store data; I don't even want to uncompress a 70GB .zip file on an HDD.
– Swap is what the system uses when it's out of memory; this is useful if you overprovision your VMs' and containers' memory. If RAM + swap run out, the Linux OOM killer starts killing random processes to free memory, and you don't want that to happen in the middle of your computation.
My system currently has 16GB of RAM (and can go up to 64GB). Besides my machine learning VM and Proxmox, the only ones that will be running non-stop won’t need more than 4GB, so 4GB it is.
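To see how much swap you ended up with, and how eagerly the kernel will use it, a minimal sketch (the numbers shown are whatever your host reports):

```shell
# RAM and swap totals as the kernel sees them
free -h

# Swappiness: how eagerly the kernel swaps pages out (0-100, default 60)
cat /proc/sys/vm/swappiness

# If RAM + swap ever run out, the OOM killer logs its victims here:
# dmesg | grep -i "out of memory"
```
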
Regarding maxroot, minfree, and the rest, the Proxmox documentation covers them.
