There are a lot of NFS guides out there, and a quick google search will turn up what you need. Just note that package names differ between Arch and .deb-based distros. First install NFS on the Proxmox host (I think this is one of maybe two extra packages I've installed on my PVE host), then on the VM. Now is also a great time to set up the directory or drives you want to share. I had all these disks just sitting in the Datacenter > node > Disks area: 6x 18TB 12Gb/s 3.5″ SAS drives, to be exact. The first step is turning them into something you can share. ZFS handles expansion well and has RAID-like redundancy, so I set up a new pool from the ZFS tab > Create: ZFS. Add your desired disks, give it a name, and click add. PVE will build the array and let you know when it's ready. Make your directory where you want it. Mine is off root, since too many random things get stuck in /mnt, so mine is just z*** and I mount it on /z***. Then you need to pull that into the fstab on your VM running Jellyfin: a simple nfs-utils install, then mount with something like the following: 192.168.xxx.xxx:/z*** /z*** nfs rw,defaults,nofail 0 0 (adjust to your system, of course).
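The whole host-to-VM flow can be sketched like this. It's a sketch, not my exact commands: the share path /tank, the server address 192.168.1.10, and the subnet are hypothetical stand-ins for your own (mine are redacted above), and the system-touching lines are left commented so you can review them first.

```shell
# hypothetical values: substitute your own share path, server IP, and subnet
SHARE=/tank
SERVER=192.168.1.10

# on the proxmox host: install the NFS server and export the directory
#   apt install nfs-kernel-server
#   echo "$SHARE 192.168.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
#   exportfs -ra

# on the client vm: install the NFS client
#   apt install nfs-common     # debian/ubuntu
#   pacman -S nfs-utils        # arch

# the fstab entry, matching the line in the post
FSTAB_LINE="$SERVER:$SHARE $SHARE nfs rw,defaults,nofail 0 0"
echo "$FSTAB_LINE"
#   mkdir -p "$SHARE"
#   echo "$FSTAB_LINE" >> /etc/fstab && mount -a
```

The nofail option matters here: if the Proxmox host is down when the VM boots, the VM still comes up instead of hanging on the mount.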
Another important thing to note here is file permissions. I had to set the uid and gid on the folder I'm sharing via NFS on the host before pulling it into the client, and moreover, the uid and gid had to match those on the Jellyfin VM and the permissions in my docker setup where the arrs live. This is crucial for downloading, file transfers, renaming, everything. The simplest way for me to do this was to rsync my media files over from my synobox to the ZFS share on the host, then chmod and chown the whole directory: chmod 775 -R /z***/ and chown ***:*** -R /z***/, with the chown matching my uid and gid on the client docker and Jellyfin.
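As a sketch of that sequence, assuming a hypothetical share at /tank and a media user at 1000:1000 (check yours with `id <user>` on each box, since my actual ids are redacted above):

```shell
# hypothetical ids and path; these must match on the host, the vm,
# and inside the arr containers
MEDIA_UID=1000
MEDIA_GID=1000
SHARE=/tank

# copy the media over, then normalize permissions and ownership recursively
#   rsync -avh --progress synobox:/volume1/media/ "$SHARE"/
#   chmod -R 775 "$SHARE"
#   chown -R "$MEDIA_UID:$MEDIA_GID" "$SHARE"

# sanity check: the same numeric ids have to appear everywhere files are touched
echo "chown target: $MEDIA_UID:$MEDIA_GID"
```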
One more whoopsie you don't want to make is setting the VM client to auto-sign-in. For whatever reason, the group that gets assigned to make auto-login work steals gid 1000, which you need for setting the later permissions. Ask me how I found that out; the only fix was to revert to the original VM snapshot and start over. So the game plan becomes: uid and gid are set to 1000:1000 by the client VM on initial setup; those ids are forced on the NFS server in the export and applied to the directories and files before the share is connected to the client; once connected, they match the client's permissions; then you pass those same ids into the docker-compose files and thus into the containers for the arrs. If it sounds like a pain in the ass, it is. But that careful planning will save you headaches later and enables atomic copying.
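One way to force the ids on the server side is NFS identity squashing in the export options. This is a sketch under assumptions (hypothetical /tank path and subnet, and I'm naming all_squash as one technique that does this, not necessarily the exact export options used above); the line that edits /etc/exports is left commented:

```shell
# all_squash maps every client user to anonuid/anongid, so files on the
# export are always owned 1000:1000 no matter who writes them
EXPORT_LINE="/tank 192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=1000)"
echo "$EXPORT_LINE"
# on the proxmox host:
#   echo "$EXPORT_LINE" >> /etc/exports
#   exportfs -ra    # reload the export table without a restart
```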
Once you have these hurdles out of the way, existing media will just populate when you add the root folders throughout the arrs and the Jellyfin server. Atomic copying works out of the box and everything just works. You can use that ZFS share on any other VM you create, keeping in mind to have a user (not root) with those same uid and gid. The only hiccup I've found is that if you want to add an additional arr container, you need to create the folder structure and set permissions on the host before starting that container, or container creation will fail. Not a huge deal, but I would be strategic and sort out the file structure as defined in the TRaSH guides. I also have a dedicated /z***/docker/appdata folder for the configs and storage of all my arr docker containers, and those permissions also need to match. Linking back to that lets configs survive docker container updates, a huge benefit for all this legwork.
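The pre-creation step for a new arr container might look like this. Everything here is hypothetical (sonarr as the example container, /tmp/tank standing in for the real share, and a layout loosely following the TRaSH guides); the chown is commented since it needs root and must match your own ids:

```shell
# stand-in for the real share path; the author mounts theirs off root
SHARE="${SHARE:-/tmp/tank}"

# create the media tree and the appdata dir for the new container
# BEFORE starting it, or container creation fails
mkdir -p "$SHARE/data/media/tv" "$SHARE/data/media/movies" "$SHARE/data/torrents"
mkdir -p "$SHARE/docker/appdata/sonarr"
chmod -R 775 "$SHARE"
#   chown -R 1000:1000 "$SHARE"   # must match the ids used everywhere else

# then in the container's docker-compose.yml, pass the same ids:
#   environment:
#     - PUID=1000
#     - PGID=1000
ls -d "$SHARE/data/media/tv"
```

Binding the container's config volume to that appdata folder is what gives you the persistence across image updates mentioned above.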
This is a long rant, but it's probably the most intricate piece of the puzzle: getting all the permissions to just work across the docker arr containers, the Jellyfin server, and the NFS shares, with GPU passthrough and transcoding. If that's all you're trying to do, this works great. However, as I've found, the Proxmox host is capable of much more, so once I could confirm everything was working I moved on to the next applications. Time for Nginx Proxy Manager and Vaultwarden to move over from my desktop (testing server).