Hi all. I’m hoping to get some help from folks with more Linux experience than me. I’m not a Linux noob, but I’m far from an expert, and I have some huge gaps in my knowledge.

I have a Synology NAS that I’m using for media storage, and a separate Linux server that uses that data. Currently the NAS is mounted with Samba; it mounts automatically at boot via an entry in /etc/fstab. This works okay, but I don’t like how Samba handles file ownership: the whole volume mounts as a single user (specified in fstab, in my case), and every file in the volume is owned by that user. So if I wanted two users on my server to each have their own directory, I would need to mount each directory separately for each user. That’s workable in simple scenarios, but if I moved my Lemmy instance volumes to my NAS, the file ownership of the DB and pictrs volumes would be lost and the users inside the containers wouldn’t be able to access their data.
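
For reference, my fstab entry looks something like this (the paths, share name, and uid here are placeholders, and the credentials file just holds the samba username and password):

    # /etc/fstab - everything under the mount shows up as owned by
    # uid/gid 1000, no matter who owns the files on the NAS
    //nas.local/media  /mnt/media  cifs  credentials=/home/me/.smbcreds,uid=1000,gid=1000  0  0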

Is there a way to configure Samba to preserve ownership? Or is there an alternative to Samba that supports this?

Edit:

Okay, so I set up NFS, and it appears to do what I want. All of the user IDs carry over when I cp -a my files, and my two users can write to the directories I set up for them, owned by them. It seems all good on the surface. But when I copied my whole lemmy folder over and tried to start the containers, postgres still crashes: the logs alternate between “Permission denied” and “chmod: Operation not permitted” forever. I logged into the container to see what’s going on, and inside it, root can’t access a directory, which is bizarre; the container’s root user can access that same directory when the volumes are on my local filesystem. As a test, I copied the whole lemmy directory from one place on my local filesystem to another (instead of from local to NFS), and everything worked fine.
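
For completeness, here’s roughly what the NFS side looks like now (paths and subnet are placeholders, and on a Synology DSM actually writes the exports entry for you):

    # /etc/exports on the NAS. Note the default root_squash option,
    # which maps a client's root user to nobody - I suspect that's
    # related to the container problem above.
    /volume1/media  192.168.1.0/24(rw,sync,root_squash)

    # /etc/fstab on the server:
    nas.local:/volume1/media  /mnt/media  nfs  defaults  0  0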

I think this exact issue might be out of scope for my original question, and I might need to make a post on !selfhosted@lemmy.world instead, since what I originally wanted has been accomplished with NFS.

  • Nitrousoxide@beehaw.org

    Having to manage an LDAP or AD directory service just to get some auth for NFS is a dealbreaker for like 99% of people. It’s such a dumb protocol for the average user; it was designed with only huge corporate clients in mind.

    Just give people simple password auth, or let them exchange public/private keys between the devices that need to connect!

    • 2xsaiko@discuss.tchncs.de

      You don’t need LDAP or AD. Kerberos is a separate thing and nowhere near as insane as LDAP, though it’s true that the two are often combined (in AD, for example). It’s also purely an authentication system, so there are no permission controls or anything beyond kadmin, from what I can tell.

      If I’m not forgetting anything, you need to do pretty much three things (sketched below):

      • either set up some DNS entries for autodiscovery of your KDC, or install a config file on each host (you probably want the config file either way, to set the default realm so you don’t have to type it when logging in; DNS just makes the file optional)
      • set up user principals (you need these for Samba too)
      • create a principal for the NFS service

      (Apparently you also need host principals for each machine that wants to connect to NFS, but my MacBook can log in and mount the NFS share without one, so maybe not. I’m still looking into that, because I do actually want it for non-home-network purposes.)
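
      For concreteness, here’s a rough sketch of those steps with MIT Kerberos. The realm and hostnames are made up, and Heimdal’s kadmin syntax differs a bit:

          # /etc/krb5.conf on each host - sets the default realm; the
          # [realms] block is what the DNS entries would make optional
          [libdefaults]
              default_realm = HOME.EXAMPLE

          [realms]
              HOME.EXAMPLE = {
                  kdc = kdc.home.example
                  admin_server = kdc.home.example
              }

          # on the KDC, create the principals:
          kadmin.local -q "addprinc alice"                          # user principal
          kadmin.local -q "addprinc -randkey nfs/nas.home.example"  # NFS service principal
          # host principal for a client (possibly optional, see above):
          kadmin.local -q "addprinc -randkey host/client.home.example"

          # put the NFS service key into the server's keytab:
          kadmin.local -q "ktadd -k /etc/krb5.keytab nfs/nas.home.example"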

      Kerberos is the simple password authentication if you use it by itself. Sure, it does stuff that isn’t needed in a small home network, such as multi-realm support, and they could probably have either built another authentication system for NFS like Samba’s, or made something that authenticates users via SSH, but there’s probably a reason that hasn’t been added until now; I assume it at least partially has to do with system-wide mounts.
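
      On the client side it then behaves exactly like password auth. A sketch, again with made-up names (sec=krb5 is authentication only; krb5i/krb5p add integrity/encryption on top):

          # /etc/exports on the NFS server:
          /export/media  *(rw,sec=krb5)

          # /etc/fstab on the client:
          nas.home.example:/export/media  /mnt/media  nfs  sec=krb5  0  0

          # a user grabs a ticket with their password, then accesses
          # the mount with their own identity:
          kinit alice
          ls /mnt/media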

      And Kerberos really isn’t that bad. I set it up in under a day, and most of that was spent debugging an NFS mount that wouldn’t work (finally solved by rebooting the NFS server; still not sure what that was about >_>).