

Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, expert RoN modder, and creator of RoN:EE’s community patch (CBP). He/him.
(header photo by Brian Maffitt)
Unfortunately it’s hard for the rest of us to tell if you actually think you want a video to save you from having to read 18 sentences or if you’re just taking the piss lol
For platforms that don’t accept those types of edits, the link OP tried to submit: https://www.theverge.com/news/690815/bill-gates-linus-torvalds-meeting-photo
That video of them interviewing people on the street with it was pretty fun!
So they literally agree not using an LLM would increase your framerate.
Well, yes, but the point is that while you’re actually using the tool you don’t need your frame rate maxed out anyway (the alternative would probably be alt-tabbing, where again you wouldn’t need your frame rate maxed out), so that downside seems kind of moot.
Also what would the machine know that the Internet couldn’t answer as or more quickly while using fewer resources anyway?
If you include the user’s time as a resource, it sounds like it could potentially do a pretty good job of explaining, surfacing, and modifying game and system settings, particularly to less technical users.
For how well it works in practice, we’ll have to test it ourselves / wait for independent reviews.
It sounds like it only needs to consume resources (at least significant resources, I guess) when answering a query, which will already be happening when you’re in a relatively “idle” situation in the game since you’ll have to stop to provide the query anyway. It’s also a Llama-based SLM (S = “small”), not an LLM for whatever that’s worth:
Under the hood, G-Assist now uses a Llama-based Instruct model with 8 billion parameters, packing language understanding into a tiny fraction of the size of today’s large scale AI models. This allows G-Assist to run locally on GeForce RTX hardware. And with the rapid pace of SLM research, these compact models are becoming more capable and efficient every few months.
When G-Assist is prompted for help by pressing Alt+G — say, to optimize graphics settings or check GPU temperatures — your GeForce RTX GPU briefly allocates a portion of its horsepower to AI inference. If you’re simultaneously gaming or running another GPU-heavy application, a short dip in render rate or inference completion speed may occur during those few seconds. Once G-Assist finishes its task, the GPU returns to delivering full performance to the game or app. (emphasis added)
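For anyone curious what “a Llama-based Instruct model with 8 billion parameters” running locally actually looks like, here’s a minimal sketch using llama-cpp-python with a quantized instruct model. To be clear, this is just an illustration of the general idea (local SLM inference that only spins up when you submit a query), not NVIDIA’s actual G-Assist code; the model file, prompt, and settings are placeholders:

```python
# Minimal sketch of local inference with a quantized ~8B instruct model.
# Not NVIDIA's G-Assist implementation; just an illustration of the idea
# using llama-cpp-python with a hypothetical local model file.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-8b-instruct-q4_k_m.gguf",  # placeholder model file
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=4096,       # modest context window to keep VRAM usage small
    verbose=False,
)

# Inference only runs when a query is actually submitted, mirroring the
# "briefly allocates a portion of its horsepower" behaviour quoted above.
reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a PC game/system settings assistant."},
        {"role": "user", "content": "My GPU runs hot while gaming. Which settings should I check?"},
    ],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```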
Would be curious to read the LLM output.
It looks like it’s available in the linked study’s paper (near the end)
The process described and shown in the screenshots does seem a bit much for a cancellation. Suing feels disproportionate when I first hear it, but are there many other recourses to force the process to become more user-friendly?
Fair point, but I guess I would hope that the person being paid to write the copy would check it, since getting that right seems like it’s part of their job description ¯\_(ツ)_/¯
Or 53.6 degrees Fahrenheit if you believe whoever wrote the page for Nissan lmao. I guess they just typed it into a converter with no context, and the converter spat out an answer amounting to “if your thermometer says it’s 12 degrees C, that would be 53.6 degrees F”… but without that context.
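For anyone wondering where the 53.6 comes from: a plain unit converter treats 12 °C as an absolute thermometer reading (F = C × 9/5 + 32), whereas a temperature difference of 12 °C only scales by 9/5 with no offset. A quick sketch of the two conversions, assuming (as I read it) the original figure was meant as a difference rather than a reading:

```python
def c_to_f_reading(c):
    """Absolute thermometer reading: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

def c_to_f_difference(dc):
    """Temperature difference: the +32 offset doesn't apply."""
    return dc * 9 / 5

print(c_to_f_reading(12))     # 53.6 -> what a context-free converter gives you
print(c_to_f_difference(12))  # 21.6 -> the right number if 12 °C is a difference
```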
Thanks for so politely and cordially sharing that information
edit: I would be even more appreciative if it were true: https://www.rockpapershotgun.com/rocket-league-ending-mac-and-linux-support-because-they-represent-less-than-0-3-of-active-players
Quoting their statement:
Regarding our decision to end support for macOS and Linux:
Rocket League is an evolving game, and part of that evolution is keeping our game client up to date with modern features. As part of that evolution, we’ll be updating our Windows version from 32-bit to 64-bit later this year, as well as updating to DirectX 11 from DirectX 9.
There are multiple reasons for this change, but the primary one is that there are new types of content and features we’d like to develop, but cannot support on DirectX 9. This means when we fully release DX11 on Windows, we’ll no longer support DX9 as it will be incompatible with future content.
Unfortunately, our macOS and Linux native clients depend on our DX9 implementation for their OpenGL renderer to function. When we stop supporting DX9, those clients stop working. To keep these versions functional, we would need to invest significant additional time and resources in a replacement rendering pipeline such as Metal on macOS or Vulkan/OpenGL4 on Linux. We’d also need to invest perpetual support to ensure new content and releases work as intended on those replacement pipelines.
The number of active players on macOS and Linux combined represents less than 0.3% of our active player base. Given that, we cannot justify the additional and ongoing investment in developing native clients for those platforms, especially when viable workarounds exist like Bootcamp or Wine to keep those users playing.
Fair enough! I barely use its social side since most of the games I’ve played on there are singleplayer titles - honestly didn’t even know that wasn’t there yet!
I kinda understand it not being a priority; even if they dedicated the resources to both create and adequately maintain Linux support, I imagine very few of the games on the platform have native support anyway. Sure, many would work (to varying degrees) with the various bags of tricks available, but it’s still an extra step of compatibility that’s sort of beyond their immediate control.
I guess our opinions differ, because I don’t consider either of those to be “basics”. They’re nice features for e.g. Steam to have, sure, but they’re not “game launcher 101” imo.
What do you consider basic that it’s still missing? To be honest I’ve felt content with it as a game launcher for a while now, but I admittedly don’t use it that often either.
FYI: https://github.com/uBlockOrigin/uAssets/issues/9785
So consider using the official website, which is currently: https://lubuntu.me
“Comma-la” unfortunately doesn’t help much for people without US accents lol (though of course people in the US are who the question and answer are most relevant to). On first reading – without the accent or something close to it – it implies “kom-uh-luh”, whereas with the accent it implies something more like “kah-muh-luh”, just based on how people pronounce “comma” differently.
Intel fumbled hard with some of their recent NICs including the I225-V,[1][2] which took them multiple hardware revisions in addition to software updates to fix.
AMD also had to be dragged kicking and screaming into letting earlier AM4 motherboard buyers upgrade to Ryzen 5000 chips,[3][4] and basically lied to buyers about support for sTRX4, which required an upgrade from the earlier TR4 to support third-gen Threadripper but at least came with a commitment to “long-term” longevity in return.[5][6] They then turned around and released no new CPUs for the platform, leaving people stranded on it despite the earlier promises.[7]
I know it’s appealing to blindly trust one company’s products (or specific lineup of products) because it simplifies buying decisions, but no company or person is infallible (and companies in particular are generally going to profit-max even at your expense). Blindly trusting one unfortunately does not reliably lead to good outcomes for end-users.
edit: “chipset” (incorrectly implying TRX40) changed to “platform” (correctly implying sTRX4); added explicit mention of “AM4” in the context of the early motherboard buyers.
There’s currently no implementation (the repos are currently just skeletons), so it could just be a semantics difference right now.
That is indeed the very first criterion listed in the sidebar, despite you being showered in downvotes for saying it.
I think you’ve tilted slightly too far towards cynicism here, though “it might not be as ‘fair’ as you think” is probably also still largely true for people that don’t look into it too hard. Part of my perspective is coming from this random video I watched not long ago which is basically an extended review of the Fairphone 5 that also looks at the “fair” aspect of things.
Misc points:
So yes, they are a long way from selling “100% fair” phones, but it seems like they’re inching the needle a bit more than your summary suggests, and that’s not nothing. It feels like you’ve skipped over lots of small-yet-positive things which are not simply “low economy of scale manufacturing” efforts.