• 2 Posts
  • 88 Comments
Joined 1 year ago
Cake day: July 20th, 2023

  • I honestly can’t recall or put my finger on what I did wrong.

    Chose Fedora because it used my laptop’s subwoofer and wasn’t a rolling release. I remember reading each time (x2) about how to upgrade the distro, and each time my system was completely borked. I went to Debian, read up on ALSA, made my subwoofer work with a homegrown script, and never looked back.
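
    For the curious, the homegrown script was nothing fancy; roughly in this spirit (the card index and control name are placeholders - list yours with amixer scontrols):

        #!/bin/sh
        # Unmute the subwoofer channel and set a sane volume via ALSA.
        # 'Bass Speaker' is a hypothetical control name; codecs expose different ones.
        amixer -c 0 sset 'Bass Speaker' unmute
        amixer -c 0 sset 'Bass Speaker' 80%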

    To this day I am wondering whether the people recommending Red Hat are trolls or paid.



  • Graphics drivers for the sc8280xp are already a thing. Currently there are other issues that hurt the convenience of daily-driving Linux. Off the top of my head:

    • firmware update path
    • dtb update/loading path
    • no virtualization
    • no universal dock compatibility
    • missing HDMI/DP features

    I suspect that these issues are common to both of their ARM chips and will be addressed for both almost simultaneously. But I have no real insight into kernel development, and their documentation is only shared with Linaro, so one can only guess.



  • If you run qemu from the CLI you get a window which grabs keyboard and mouse automatically. Ctrl+Alt+G (off the top of my head) releases the input devices so you can navigate the host again. The window is otherwise a default window for your display server.

    I find qemu from the CLI way more transparent than these GUI applications, since each VM is a single, readable script. So I recommend this.

    Regarding installation on iMac bare metal: if the kernel supports virtualization you can expect it to work flawlessly. If you have a dedicated graphics card you can only pass it through (as well as dedicated devices like HDDs) if your mainboard supports IOMMU.

    If it does, all you need is the qemu man page to set up your VM.

    Why I prefer a qemu script to any GUI alternative:

    The entire script for passing RAM, a GPU and an HDD is about 10 lines max (sketched below). A default VM with TCG emulation, e.g. via libvirt, can easily run to 50 lines of XML.

    I recommend giving it a try. My workflow: place the install script in some directory; the default run script goes in my ~/.bin/. You can combine these scripts, but I find it way simpler to separate them (a combined one would need more elaborate options for mounting devices).
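
    A sketch of such a run script, assuming a KVM-capable host with VFIO already set up (the PCI addresses and disk path are made up - check lspci and the qemu man page for your values):

        #!/bin/sh
        # Boot a VM with 8 GiB RAM, a passed-through GPU and one raw disk.
        # 01:00.0 / 01:00.1 are placeholder addresses for the GPU and its audio function.
        exec qemu-system-x86_64 \
            -enable-kvm \
            -machine q35 \
            -cpu host \
            -smp 4 \
            -m 8G \
            -device vfio-pci,host=01:00.0,multifunction=on \
            -device vfio-pci,host=01:00.1 \
            -drive file=/dev/disk/by-id/ata-EXAMPLE,format=raw,if=virtio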


  • That’s beyond my experience, but I would say functional languages can perform similarly.

    I suppose - and honestly do not know whether - aggregation is done via synchronization into some persistence unit.

    Therefore I would expect that a functional language like Elixir, Lisp etc. would outperform a language with manual memory management in terms of maintainability.

    Depending on its capabilities for packing structs into nearby memory, and on the traceability and sophistication of the compiler, it may even outperform it single- or multi-threaded.

    Though outperforming recent JREs may be hard, since they can trace hot paths. Default-configuration Java vs. a proficient developer in a functional language: I assume the latter would at least break even.

    But I can’t judge. On the repository of said program I did not even bother to look at the contents of the build.gradle or Dockerfile, to be honest.

    I do think that the maintainability of functional languages, when only the common denominator between any functional languages is used, is better than that of spaghetti Java source code. But that’s another issue, right?

    // edit: Spaghetti source code is a good thing in my opinion. And since I did not address your question directly: a proficient developer is more likely to write faster Java than functional code, since Java is just a layer above C with one of the best compilers there is. Functional languages require carrying some non-negligible knowledge of the compiler to make use of the fastest paths through the code. Java, on the other hand, is just ALGOL syntax and therefore imperative, which translates more easily into *.asm.

    // edit2: Synchronization into some DB doesn’t depend on the nature of the language, but there may be overhead where some language concepts simply perform better. So I would expect transitions from an interpreted language to be slower than from compiled languages. Note that even though Java belongs to the former, it is conceptually compatible with the latter. I’m out. You called me out. I’m still a newbie. Had to append so much.


  • There is Sublink, but it’s written in Java; I don’t think I want to deal with Java’s runtime environment.

    Don’t hate Java just for the sake of it. According to the repository they ship a Dockerfile and use Gradle to build it. Everything should be abstracted away for you.
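
    In practice that should boil down to something like this (the image name and port are made up - check the repository’s README for the real ones):

        # Build the image; Gradle runs inside the container, not on your host.
        docker build -t sublinks .
        # Run it without any Java tooling installed locally.
        docker run -d -p 8080:8080 sublinks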

    When comparing the runtime environments of Java and Python programs you should probably prefer Java’s: years of experience, built from the ground up for enterprise deployment. Python’s module system is hacked together. It ain’t even fair to Python to compare the two in this regard.

    Also, this project is spot-on within Java’s main territory. It makes absolute sense to me to use Java for such a program.

    Plus, monitoring/maintaining a Java application is way better than any Python program.




  • It is bearable but not feature complete. Every month Linaro and the community add functionality. The most recent additions include a custom power-domain mapper implementation and apparently camera support.

    If you are running Wayland you can simply install any OS and it works out of the box.

    The laptop’s weight and heat output are awesome. Very practical. The body is also exceptionally sturdy and worth mentioning (even in comparison to a T14, for example).

    But:

    • external monitors are not detected at boot
    • no hibernation
    • battery life is very dependent on the task; it ranges from 4 to 13 hours
    • no virtualization support, so instead of KVM one is stuck with the tiny code generator (TCG) runtime
    • audio is pretty quiet, so depending on the environment an external source is required.

    I followed almost all the patches on the LKML. It appears to me that the upcoming chip can benefit hugely from the sc8280xp work. It suffices for my use cases, but I had promised myself a little better.


  • The EU will already have projects in development, as far as my experience goes.

    What I do not know, but think applies: such an act is legally binding for all member states. If they fight these things, they are allowed to petition the EU court for adjustments in order to align it with national law. This can postpone the national implementation by a few years.

    But it can only be revoked by a new act of the EU council.

    And they can simply ignore any new suggestion of the EU parliament if they like to.


  • Doesn’t the Debian community already maintain a Chromium fork? How much does that cost?

    I honestly can’t and wouldn’t judge: Time, Resources, implicit know-how etc. are unknown to me.

    The human time needed should grow with the number of patches that need to be applied to the upstream code base, …

    yep

    … because some will fail now and then.

    Forks happen for different reasons, so it depends on why you fork. It is possible that one feature diverges so much that applying patches isn’t enough; neither patches in the Debian sense nor plain .diff/.patch files.

    This is what I refer to as the “fatness” of the fork: the more patches, the fatter. It should be possible to build, package and publish a fork with zero patches without human intervention, after the initial automation work.
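
    A sketch of that automation, with purely illustrative names and a placeholder build step (this is not Debian’s actual tooling):

        #!/bin/sh
        set -e
        # Rebuild the fork from the latest upstream plus our patch series.
        git clone --depth 1 https://example.org/upstream/browser.git
        cd browser
        for p in ../patches/*.patch; do
            [ -e "$p" ] || continue   # zero patches: nothing to do
            git apply "$p"            # when upstream drifts, this fails and a human steps in
        done
        ./build-and-publish.sh        # placeholder for the real build/package/publish step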

    For a brief period, until something rattles in the build system. Debian patches are often applied to remove binary blobs due to licensing: imagine upstream chose to include M$ Recall in the render engine. You would need to put in an extraordinary amount of work, maybe even maintain a completely separate implementation. This would also imply changes to the build system, which would now need to be aligned continuously between both upstreams.

    Maybe I’m missing something obvious. 😅

    With each version you have to review every commit very carefully if you want to maintain compatibility with upstream, in order to merge patches into your fork.

    When there are 50 devs working on upstream and you need to review every commit to assure requirement X, that alone is a hard path. If you also need to apply workarounds compatible with future versions of upstream, you need PROFESSIONALS. Luckily these are found in the FOSS community; but they are underpaid and, worse, underappreciated.

    // plus, I could imagine that things like Chrome may not even ship with the full test suite. The test suite of a browser is surely so huge I can’t even comprehend the effort put into it. And then bug tickets… Upstream says: not in my version. Now the fork has to address these itself! :)


  • It does not depend on how fat the fork is. You provide some reasons yourself.

    Your assumption appears to be that open source software can be maintained at minimal cost by the community, and that software aid ensures some sort of ongoing bug prevention.

    In the end you still need at least a few full-time devs on it. It would be fair to pay them accordingly if they are maintaining behemoths of software.

    Fun fact: infrastructure costs are x times higher than IT personnel in my organization. A big chunk of it is energy and space; but it’s less than licensing costs…