Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, expert RoN modder, and creator of RoN:EE’s community patch (CBP). He/him.

(header photo by Brian Maffitt)

  • 0 Posts
  • 42 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • It covers the breadth of problems pretty well, but I feel compelled to point out that there are a few places where things are misrepresented in this post, e.g.:

    Newegg selling the ASUS ROG Astral GeForce RTX 5090 for $3,359 (MSRP: $1,999)

    eBay Germany offering the same ASUS ROG Astral RTX 5090 for €3,349.95 (MSRP: €2,229)

    The MSRP for a 5090 is $2k, but the MSRP for the 5090 Astral – a top-end card being used for overclocking world records – is $2.8k. I couldn’t quickly find the European MSRP, but my money’s on it being more than €2.2k.
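    To put rough numbers on why that distinction matters, here’s a quick back-of-the-envelope sketch (I’m assuming $2,799.99 for the Astral’s US MSRP, in line with the ~$2.8k above):

      # Back-of-the-envelope markup check for the Newegg listing above.
      # Assumption: the Astral's US MSRP is $2,799.99 (the ~$2.8k figure
      # mentioned); $1,999 is the base 5090 MSRP the post compared against.
      listing_price = 3359.00

      msrps = {
          "vs. base 5090 MSRP ($1,999)": 1999.00,
          "vs. Astral MSRP (~$2,800)": 2799.99,
      }

      for label, msrp in msrps.items():
          markup_pct = (listing_price - msrp) / msrp * 100
          print(f"{label}: {markup_pct:.0f}% over MSRP")

      # vs. base 5090 MSRP ($1,999): 68% over MSRP
      # vs. Astral MSRP (~$2,800): 20% over MSRP

    A ~20% markup is still scalping, but it’s a meaningfully different claim than the ~68% the post’s comparison implies.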

    If you’re a creator, CUDA and NVENC are pretty much indispensable, or editing and exporting videos in Adobe Premiere or DaVinci Resolve will take you a lot longer[3]. Same for live streaming, as using NVENC in OBS offloads video rendering to the GPU for smooth frame rates while streaming high-quality video.

    NVENC isn’t much of a moat right now, as both Intel’s and AMD’s encoders are roughly comparable in quality these days (including on Intel’s iGPUs!). There are cases where NVENC might do something specific better (like 4:2:2 support for prosumer/professional use cases) or have better software support in a specific program, but for common use cases like streaming and recording gameplay, the alternatives should be roughly equivalent for most users.
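    They’re also near drop-in replacements at the software level; e.g., with ffmpeg it’s basically just a different encoder name (a sketch only – which encoders are available depends on your GPU and your ffmpeg build, and gameplay.mkv is a placeholder file):

      # Sketch: the same encode job through each vendor's H.264 hardware
      # encoder via ffmpeg. Availability depends on GPU and ffmpeg build;
      # at typical streaming bitrates the three are broadly comparable.
      import subprocess

      H264_ENCODERS = {
          "nvidia": "h264_nvenc",  # NVENC
          "intel": "h264_qsv",     # Quick Sync (dGPUs and recent iGPUs)
          "amd": "h264_amf",       # AMF/VCN
      }

      def encode(src: str, dst: str, vendor: str) -> None:
          """Re-encode src to dst using the given vendor's hardware encoder."""
          subprocess.run(
              ["ffmpeg", "-i", src, "-c:v", H264_ENCODERS[vendor], "-b:v", "6M", dst],
              check=True,
          )

      encode("gameplay.mkv", "gameplay-nv.mp4", "nvidia")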

    as recently as May 2025 and I wasn’t surprised to find even RTX 40 series are still very much overpriced

    Production apparently stopped on these for several months leading up to the 50-series launch; it seems unreasonable to harshly judge the pricing of a product that hasn’t had new stock for an extended period of time (of course, you can then judge either the decision to stop production or the still-elevated pricing of the 50 series).


    DLSS is, and always was, snake oil

    I personally find this take crazy given that DLSS2+ / FSR4+, when quality-biased, average visual quality comparable to native for most users in most situations – and that finding was for DLSS2 back in 2023, not even DLSS3, let alone DLSS4 (which is markedly better on average). I don’t really care how a frame is generated if it looks good enough (and doesn’t come with other notable downsides like latency). This almost feels like complaining about screen space reflections being “fake” reflections. Like yeah, it’s fake, but if the average player experience is consistently better with it than without it, then what does it matter?

    Ever-more-complex manufacturing nodes are getting expensive as all fuck. If it’s more cost-efficient to spend some of that die area on specialized cores that do high-quality upscaling instead of rendering everything natively, then that’s fine by me. I don’t think branding DLSS (and its equivalents like FSR and XeSS) as “snake oil” is the right takeaway. If the options are (1) spend $X on a card that outputs 60 FPS natively or (2) spend $X on a card that outputs upscaled 80 FPS at quality good enough that I can’t tell it’s not native, then sign me the fuck up for option #2. People less fussy about static image quality and more invested in smoothness can be perfectly happy with 100 FPS and marginally worse image quality. Not everyone is as sweaty about static image quality as some of us in the enthusiast crowd are.
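    For a rough sense of where that upscaling headroom comes from, consider pixel counts (illustrative only – frame time isn’t perfectly linear in pixel count, and the upscale pass itself has a cost):

      # Rough sketch: shading work scales (very roughly) with pixels rendered.
      # DLSS/FSR "Quality" mode at a 4K output renders internally at ~1440p.
      def pixels(width: int, height: int) -> int:
          return width * height

      native_4k = pixels(3840, 2160)       # 8,294,400 pixels per frame
      internal_1440p = pixels(2560, 1440)  # 3,686,400 pixels per frame

      print(f"Quality mode shades {internal_1440p / native_4k:.0%} of native 4K's pixels")
      # -> 44%, which is where most of the extra frame rate comes from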

    There are some fair points here about RT (though I find exclusively using path tracing for RT performance testing a little disingenuous given the performance gap), but if RT performance is the main complaint, then why is the sub-heading “DLSS is, and always was, snake oil”?


    obligatory: disagreeing with some of the author’s points is not the same as saying “Nvidia is great”



  • I think you’ve tilted slightly too far towards cynicism here, though “it might not be as ‘fair’ as you think” is probably also still largely true for people who don’t look into it too hard. Part of my perspective comes from this random video I watched not long ago, which is basically an extended review of the Fairphone 5 that also looks at the “fair” aspect of things.

    Misc points:

    • In targeting Scope 2 emissions they went with renewables to get down to 0 Scope 2 emissions. (p13)
    • In targeting Scope 3 emissions they rejigged their transportation a little (ocean freight instead of flying, it sounds like?) to reduce emissions there. (p14)
    • In targeting Scope 3 emissions they used an unspecified level of renewable energy in late manufacturing with modest claimed emissions reductions. (p14)
    • Retired some carbon credits, which, yes, are usually not as great as we would like, but still. (p14)
    • They may have some impact by choice of supplier even when they don’t necessarily directly spend extra cash on, e.g., higher worker payments.
    • They may have some impact by engaging with suppliers. They provide small-scale examples of conducting worker satisfaction surveys via independent third party which seemed to provide some concrete improvements (p30) and “supporting” another supplier in “implementing best practices for a worker-management safety committee” (p30).
    • They’re reducing exposure to hazardous chemicals in final assembly, and according to them they are “the first company to start eliminating CEPN’s second round priority chemicals” (p31). I don’t know much about this.
    • With partners, they “organize school competitions in which children are educated about […] e-waste” (p40).
    • They’re “building local recycling capacity” in Ghana by “collaborating” with recycling companies (p40).
    • Extremely high repairability (with modest costs for replacement parts that make it financially sensible to repair instead of replace) keeps more phones in use, reducing all the bad parts of having to manufacture brand new phones.
    • The ICs make up a huge portion of the environmental costs of the phone (both for the FP4 (pp 40-41) and for the FP5 (p10)), and Fairphone isn’t big enough to get behemoth chip manufacturers to change their processes (though apparently they’re lobbying Qualcomm for socketable designs, as unlikely as that is to happen any time soon). If you accept the premise that they have almost no manufacturing-side influence over around half of the phone, their efforts on the rest look a bit better, I guess?

    So yes, they are a long way from selling “100% fair” phones, but it seems like they’re moving the needle a bit more than your summary suggests, and that’s not nothing. It feels like you’ve skipped over lots of small-yet-positive things which are not simply “low economy of scale manufacturing” efforts.





  • So they literally agree not using an LLM would increase your framerate.

    Well, yes, but the point is that while you’re using the tool you don’t need your frame rate maxed out anyway (the alternative would probably be alt-tabbing, where again you wouldn’t need your frame rate maxed out), so that downside seems kind of moot.

    Also what would the machine know that the Internet couldn’t answer as or more quickly while using fewer resources anyway?

    If you include the user’s time as a resource, it sounds like it could potentially do a pretty good job of explaining, surfacing, and modifying game and system settings, particularly to less technical users.

    For how well it works in practice, we’ll have to test it ourselves / wait for independent reviews.


  • It sounds like it only needs to consume resources (at least significant resources, I guess) when answering a query, which will already be happening when you’re in a relatively “idle” situation in the game, since you’ll have to stop to provide the query anyway. It’s also a Llama-based SLM (S = “small”), not an LLM, for whatever that’s worth:

    Under the hood, G-Assist now uses a Llama-based Instruct model with 8 billion parameters, packing language understanding into a tiny fraction of the size of today’s large scale AI models. This allows G-Assist to run locally on GeForce RTX hardware. And with the rapid pace of SLM research, these compact models are becoming more capable and efficient every few months.

    When G-Assist is prompted for help by pressing Alt+G — say, to optimize graphics settings or check GPU temperatures — your GeForce RTX GPU briefly allocates a portion of its horsepower to AI inference. If you’re simultaneously gaming or running another GPU-heavy application, a short dip in render rate or inference completion speed may occur during those few seconds. Once G-Assist finishes its task, the GPU returns to delivering full performance to the game or app. (emphasis added)
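    To put the “tiny fraction of the size” claim in perspective, here’s a rough weight-footprint estimate for an 8-billion-parameter model (an approximation – NVIDIA hasn’t published G-Assist’s actual precision, and activations/KV cache add overhead on top of the weights):

      # Approximate weight memory for an 8B-parameter model at common
      # precisions. NVIDIA hasn't said what quantization G-Assist uses;
      # real usage also needs room for activations and the KV cache.
      PARAMS = 8e9

      for precision, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
          gib = PARAMS * bytes_per_param / 2**30
          print(f"{precision}: ~{gib:.1f} GiB of weights")

      # fp16: ~14.9 GiB | int8: ~7.5 GiB | int4: ~3.7 GiB

    At the more aggressive quantizations, that would plausibly fit alongside a game on higher-VRAM GeForce cards, which squares with the “run locally” claim.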






  • Thanks for so politely and cordially sharing that information.


    edit: I would be even more appreciative if it were true: https://www.rockpapershotgun.com/rocket-league-ending-mac-and-linux-support-because-they-represent-less-than-0-3-of-active-players

    Quoting their statement:

    Regarding our decision to end support for macOS and Linux:

    Rocket League is an evolving game, and part of that evolution is keeping our game client up to date with modern features. As part of that evolution, we’ll be updating our Windows version from 32-bit to 64-bit later this year, as well as updating to DirectX 11 from DirectX 9.

    There are multiple reasons for this change, but the primary one is that there are new types of content and features we’d like to develop, but cannot support on DirectX 9. This means when we fully release DX11 on Windows, we’ll no longer support DX9 as it will be incompatible with future content.

    Unfortunately, our macOS and Linux native clients depend on our DX9 implementation for their OpenGL renderer to function. When we stop supporting DX9, those clients stop working. To keep these versions functional, we would need to invest significant additional time and resources in a replacement rendering pipeline such as Metal on macOS or Vulkan/OpenGL4 on Linux. We’d also need to invest perpetual support to ensure new content and releases work as intended on those replacement pipelines.

    The number of active players on macOS and Linux combined represents less than 0.3% of our active player base. Given that, we cannot justify the additional and ongoing investment in developing native clients for those platforms, especially when viable workarounds exist like Bootcamp or Wine to keep those users playing.







  • “Comma-la” unfortunately doesn’t help much for people without US accents lol (though of course people in the US are who the question and answer are most relevant to). On first reading – without the accent or something close to it – it implies “kom-uh-luh”, whereas with the accent it implies something more like “kah-muh-luh”, just based on how people pronounce “comma” differently.


  • Intel fumbled hard with some of their recent NICs, including the I225-V,[1][2] which took multiple hardware revisions in addition to software updates to fix.

    AMD also had to be dragged kicking and screaming into letting earlier AM4 motherboard buyers upgrade to Ryzen 5000 chips,[3][4] and basically lied to buyers about support for sTRX4, requiring an upgrade from the earlier TR4 to support third-gen Threadripper but at least committing to “long-term” longevity in return.[5][6] They then turned around and released no new CPUs for the platform, leaving people stranded on it despite the earlier promises.[7]

    I know it’s appealing to blindly trust one company’s products (or a specific lineup of products) because it simplifies buying decisions, but no company or person is infallible (and companies in particular are generally going to profit-max even at your expense). Blind trust unfortunately does not reliably lead to good outcomes for end-users.


    edit: “chipset” (incorrectly implying TRX40) changed to “platform” (correctly implying sTRX4); added explicit mention of “AM4” in the context of the early motherboard buyers.