• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: January 26th, 2025

  • Look - I can’t prevent my mom from being on Facebook and playing Candy Crush. Nothing I say or do will change that. I can improve the situation by:

    • Introducing alternatives and hoping they spread (chat with your mom on Signal)
    • Reducing data harvesting during “passive” behaviour (e.g. reduced app permissions). GrapheneOS is probably the best here, but good luck getting your mom on that.
    • Reducing data harvesting by the phone vendor (Samsung, Google, Apple). This is primarily done by buying an iPhone, simply due to incentives. (Again, good luck getting your mom on GrapheneOS.)

    If I go too hard on my mom, she’ll just buy herself a cheap Chinese Android phone without telling me. Is that better?


  • Ok, while most of these don’t have companies with huge revenues behind them, most of the work on these projects is done by paid developers, with money coming from sponsorships, grants, donations and support deals. (Or, in the case of Linux: device drivers are a prerequisite for anyone buying your product.)

    Developers getting paid to work on open source is a good thing. These projects may have begun their life as small hobby projects - they aren’t anymore. (And that’s probably good)


  • I started experimenting with the spice this past week. Went ahead and tried to vibe code a small toy project in C++. It’s weird. I’ve got some experience teaching programming, and this is exactly like teaching beginners - except that the syntax is almost flawless and it writes fast. The reasoning and design capabilities, on the other hand - “like a child” is actually an apt description.

    I don’t really know what to think yet. The ability to automate refactoring across a project in a more “free” way than an IDE is kinda nice. While I enjoy programming, data structures and algorithms, I kinda get bored at the “write code” part, so really spicy autocomplete is getting me far more progress than usual on my hobby projects.

    On the other hand, holy spaghetti monster, the code you get if you let it run free. All the people prompting based on what feature they want the thing to add will create absolutely horrible piles of garbage. But if I prompt with a decent specification of the code I want, I get code somewhat close to what I want, and given an iteration or two I’m usually fairly happy. I think I can get used to the spicy autocomplete.


  • This is a fairly significant factor when building really large systems. If we do the math, there end up being relationships between:

    • disk speed
    • targets for “resilver” time / risk acceptance
    • disk size
    • failure domain size (how many drives do you have per server)
    • network speed

    Basically, for a given risk acceptance and total system size there is usually a sweet spot for disk sizes.

    Say you want 16TB of usable space, and you want to be able to lose 2 drives from your array (a fairly common requirement in small systems). These are some options:

    • 3x16TB triple mirror
    • 4x8TB Raid6/RaidZ2
    • 6x4TB Raid6/RaidZ2

    The more drives you have, the better recovery speed you get and the less usable space you lose to replication. You also get more usable performance with more drives. Additionally, smaller drives are usually cheaper per TB (down to a limit).
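    To make the tradeoff concrete, here’s a rough back-of-the-envelope comparison of those three layouts. This is a Python sketch; the ~0.5 TB/h rebuild speed is an assumption (roughly 150 MB/s sustained), and real resilver times vary a lot with fragmentation and load:

```python
# Rough comparison of array layouts that all give 16 TB usable
# space and survive 2 simultaneous drive failures.
# The rebuild speed below is an assumption, not a measurement.

REBUILD_SPEED_TB_PER_H = 0.5  # ~150 MB/s sustained sequential rebuild

def raidz2(n_disks, disk_tb):
    """Raid6/RaidZ2: two disks' worth of capacity goes to parity."""
    usable = (n_disks - 2) * disk_tb
    overhead = 1 - usable / (n_disks * disk_tb)   # fraction lost to redundancy
    resilver_h = disk_tb / REBUILD_SPEED_TB_PER_H  # one whole disk is rewritten
    return usable, overhead, resilver_h

def triple_mirror(disk_tb):
    """Three-way mirror: two of every three disks are redundant copies."""
    usable = disk_tb
    overhead = 1 - 1 / 3
    resilver_h = disk_tb / REBUILD_SPEED_TB_PER_H
    return usable, overhead, resilver_h

for label, (usable, ovh, hrs) in [
    ("3x16TB triple mirror", triple_mirror(16)),
    ("4x8TB RaidZ2", raidz2(4, 8)),
    ("6x4TB RaidZ2", raidz2(6, 4)),
]:
    print(f"{label}: {usable} TB usable, {ovh:.0%} lost to redundancy, "
          f"~{hrs:.0f} h resilver")
# 3x16TB triple mirror: 16 TB usable, 67% lost to redundancy, ~32 h resilver
# 4x8TB RaidZ2: 16 TB usable, 50% lost to redundancy, ~16 h resilver
# 6x4TB RaidZ2: 16 TB usable, 33% lost to redundancy, ~8 h resilver
```

    Same usable capacity, same two-drive failure tolerance - but the six-drive layout loses the least space to redundancy and rebuilds each (smaller) disk fastest. That’s the sweet spot in action.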

    This means that 140TB drives become interesting if you are building large storage systems (probably at least a few PB) with low performance requirements (archives) - but that niche is already dominated by tape robots.

    The other interesting use case is huge systems - many petabytes, up into exabytes. More modern schemes for redundancy and caching mitigate some of the issues described above, but they are usually only relevant when building really large systems.

    tl;dr: arrays of 6-8 drives at 4-12TB each are probably the sweet spot for most data hoarders.


  • Oh, I fully agree that the tech behind X is absolute garbage. It still works reasonably well a decade after abandonment, though.

    I’m not saying we shouldn’t move on; I’m saying the architecture and fundamental design of Wayland was broken from the beginning. The threads online when the project was announced were very indicative of the following decade. We are replacing one big unmaintainable pile of garbage with 15 separate piles of hardware-accelerated, soon-to-be-unmaintainable tech debt.

    Oh, and a modern server doesn’t usually have a graphics card (and neither does the VM you want to host users within). I won’t bother doing the pricing calculations, but you are easily looking at 2-5x the cost per seat once you price in GPU hardware and licensing for vGPUs and hypervisors.

    With Xorg I can easily reach a few hundred active users per standard 1U server. If you make that work on Wayland I know some people happy to dump money on you.


  • enumerator4829@sh.itjust.works to linuxmemes@lemmy.world · Preference · 2 months ago
    The fundamental architectural issue with Wayland is expecting everyone to implement a compositor for a half-baked, changing protocol instead of implementing a common platform to develop on. Wayland doesn’t really exist; it’s just a few distinct developer teams playing catch-up, pretending to be compatible with each other.

    Implementing the hard part once and allowing someone to write a window manager in 100 lines of C is what X did right. Plenty of other things are bad about X, but not that.


  • Tell me you’ve never deployed remote Linux desktops in an enterprise environment without telling me you’ve never deployed remote Linux desktops in an enterprise environment.

    After these decades of Wayland prosperity, I still can’t get a commercially supported remote desktop solution that works properly for a few hundred users. Why? Because on X, you could hijack the display server itself and feed that into your nice TigerVNC server, regardless of desktop environment. Nowadays, you need to implement this in each separate compositor to do it correctly (i.e. with damage tracking). Also, unlike X, Wayland generally expects a GPU in your remote desktop servers - and have you seen the prices for those lately?