rho50

joined 1 year ago
[–] rho50@lemmy.nz 6 points 4 months ago

Power management is going to be a huge emerging issue as transformer-model inference gets deployed to the edge.

I foresee some backpedalling on the idea that "one model can do everything". LLMs have their place, but sometimes a good old LSTM or CNN is a better choice.
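For a rough sense of the gap, here's an illustrative sketch (assuming PyTorch; both architectures are made-up toy examples, not from any real deployment) comparing a small task-specific CNN against a small transformer encoder:

```python
# Toy comparison: a small CNN classifier vs. a small transformer encoder.
# The point is the rough parameter-count (and hence power) gap at the edge.
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=6,
)

def n_params(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

print(f"small CNN:         {n_params(cnn):>12,} parameters")
print(f"small transformer: {n_params(transformer):>12,} parameters")
```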

[–] rho50@lemmy.nz 17 points 4 months ago

Yeah, this is actually a pretty great application for AI. It's local, privacy-preserving and genuinely useful for an underserved demographic.

One of the most wholesome and actually useful applications for LLMs/CLIP that I've seen.

[–] rho50@lemmy.nz 1 points 4 months ago

It would be better to have this as a FUSE filesystem, though: you mount it on an empty directory, point the tool at your unorganised data, let it run its indexing and LLM categorisation/labelling, and your files are resurfaced under the mountpoint without any potentially damaging changes to the original data.

The other option would be just generating a bunch of symlinks, but I personally feel a FUSE implementation would be cleaner.

It's pretty clear that actually renaming the original files based on the output of an LLM is a bad idea though.
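Since the tool itself isn't shown here, a generic sketch of the symlink variant, where `labels` stands in for whatever the LLM indexing step actually produces:

```python
# Sketch of the symlink variant: surface categorised files under a "view"
# directory without touching the originals. `labels` is a stand-in for the
# output of the LLM categorisation step; the file names are hypothetical.
from pathlib import Path

def build_view(labels: dict[Path, str], view_root: Path) -> None:
    for original, category in labels.items():
        category_dir = view_root / category
        category_dir.mkdir(parents=True, exist_ok=True)
        link = category_dir / original.name
        if not link.exists():
            # Symlink into the view; the original file is never modified.
            link.symlink_to(original.resolve())

build_view(
    {
        Path("unsorted/IMG_2041.jpg"): "photos/2023-japan-trip",
        Path("unsorted/scan0001.pdf"): "documents/insurance",
    },
    Path("organised"),
)
```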

[–] rho50@lemmy.nz 3 points 5 months ago

If you include ChromeOS, that's very likely.

[–] rho50@lemmy.nz 0 points 5 months ago (1 children)

You can restrict what gets installed by running your own repos and locking the machines to use only those (either give employees accounts with no sudo access, or have monitoring that alerts when the repo configs are changed).
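The alerting part can be pretty simple. A rough sketch of the idea (Debian/Ubuntu-style APT paths; the baseline hash is a placeholder you'd record at provisioning time):

```python
# Sketch: alert when the APT repo configuration drifts from a known-good
# baseline. Paths are illustrative; a real deployment would ship the
# baseline from config management rather than hard-coding it.
import hashlib
import sys
from pathlib import Path

WATCHED = [Path("/etc/apt/sources.list"), *Path("/etc/apt/sources.list.d").glob("*")]
KNOWN_GOOD = "<sha256 recorded when the machine was provisioned>"  # placeholder

def fingerprint() -> str:
    digest = hashlib.sha256()
    for path in sorted(WATCHED):
        if path.is_file():
            digest.update(path.name.encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

if fingerprint() != KNOWN_GOOD:
    sys.exit("ALERT: APT repo configuration drifted from baseline")
```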

So once you're in that zone, you do need some fast-acting reactive tools that keep watch for viruses.

For anti-malware, I don't think there are many agents available to the public that work well on Linux, but they do exist inside big companies that use Linux for their employee environments. For forensics and incident response, there's GRR, which has Linux support.

Canonical may have some offering in this space, but I'm not familiar with their products.

[–] rho50@lemmy.nz 17 points 6 months ago

At least in some circumstances, the risks of sharing your DNA include having children...

[–] rho50@lemmy.nz 96 points 6 months ago* (last edited 5 months ago) (1 children)

Tbf, 500 ms of latency on (IIRC) a loopback network connection in a test environment is a lot. It's not hugely surprising that a curious engineer dug into that.

[–] rho50@lemmy.nz 13 points 6 months ago* (last edited 6 months ago) (1 children)

I don't think it's necessarily a bad thing that an AI got it wrong.

I think the bigger issue is why the AI model got it wrong. It got the diagnosis wrong because it is a language model, which is fundamentally not fit for use as a diagnostic tool, not even as a screening/aid tool for physicians.

There are AI tools designed for medical diagnoses, and those are indeed a major value-add for patients and physicians.

[–] rho50@lemmy.nz 1 points 6 months ago (1 children)

Precisely. Many of the narrowly scoped solutions work really well, too (for what they're advertised to do).

As of today though, they're nowhere near reliable enough to replace doctors, and any breakthrough on that front is very unlikely to be a language model IMO.

[–] rho50@lemmy.nz 6 points 6 months ago

Exactly. So the organisations creating and serving these models need to be clearer about the fact that they're not general-purpose intelligence, and are in fact contextual language generators.

I've seen demos of the models used as actual diagnostic aids, and they're not LLMs (plus require a doctor to verify the result).

[–] rho50@lemmy.nz 27 points 6 months ago (10 children)

There are some very impressive AI/ML technologies that are already in use as part of existing medical software systems (think: a model that highlights suspicious areas on an MRI, or even suggests differential diagnoses). Further, other models have been built and demonstrated to perform extremely well on sample datasets.

Funnily enough, those systems aren't using language models 🙄

(There is Google's Med-PaLM, but I suspect it wasn't very useful in practice, which is why we haven't heard anything since the original announcement.)

[–] rho50@lemmy.nz 89 points 6 months ago (6 children)

It is quite terrifying that people think these unoriginal and inaccurate regurgitators of internet knowledge, with no concept of or heuristic for correctness... are somehow an authority on anything.
