this post was submitted on 04 Jun 2024
Technology
Hilarious to me that it OCRs the text. The text is generated by the computer! It's almost like when Lt. Cmdr. Data wants to get information from the computer database, so he tells the computer to display it and just keeps increasing the speed.
There are way more efficient means of getting information from A to B than displaying it, imaging it, and running it through image processing!
I totally get that this is what makes sense, and it's independent of the method/library used for generating the text, but still... the computer "knows" what it's displaying (except for images of text), and yet it has to screenshot itself and read it back.
The same thing happens on Android, for some reason.
Like 5-8 years ago, the Google Assistant app could select and copy text from any app when invoked; I think the feature was called "Now on Tap". Then, because they're Google and apparently contractually obligated to remove features after some time, they pulled it from the Google app and integrated it into the Pixel app switcher (and who cares that 99% of Android users aren't on a Pixel, they say). The new implementation sucks, as it does OCR instead of just accessing the raw text…
It only works well with US English and not with other languages. But maybe that's OK, as Google's development style seems to be US-centric.
Now on Tap also used OCR. Both Google Lens and Now on Tap get the same bullshit results in any language that doesn't use the Latin alphabet. Literally, Ж gets read as >|< by both, exactly the same.
They changed it; in the beginning it used the text and not OCR.
For example, this app could be set as the assistant and get the raw text: https://play.google.com/store/apps/details?id=com.weberdo.apps.copy
But only the app set as the system assistant can do it.
I was very disappointed when they changed it around 2018, as it produced garbage in my language when it had previously worked so well…
Having worked on a product that actually did this: it's not as easy as it seems. There are many ways of drawing text on the screen.
GDI is the most common, which is part of the Windows API. But some applications do their own rendering (including browsers).
Another difficulty: even if you could tap into every draw call, you would also need a way to determine what is visible on the screen and what is covered by something else.
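That last point can be illustrated with a toy sketch (the rectangles and draw-call records below are invented for illustration, not any real compositor or GDI API): even if you intercept every text draw call, you still have to subtract whatever gets painted over it afterwards.

```python
# Toy visibility check. Draw calls are recorded in paint order, and a
# text draw only "counts" if its rectangle is not fully covered by a
# later (painted-on-top) opaque fill. Rectangles are (x1, y1, x2, y2).

def covered(rect, cover):
    """True if `cover` fully contains `rect`."""
    return (cover[0] <= rect[0] and cover[1] <= rect[1]
            and cover[2] >= rect[2] and cover[3] >= rect[3])

def visible_text(draw_calls):
    """draw_calls: list of ("text", rect, string) or ("fill", rect),
    in paint order. Returns the strings whose rectangle is not fully
    covered by any later opaque fill."""
    out = []
    for i, call in enumerate(draw_calls):
        if call[0] != "text":
            continue
        _, rect, s = call
        later_fills = [c[1] for c in draw_calls[i + 1:] if c[0] == "fill"]
        if not any(covered(rect, f) for f in later_fills):
            out.append(s)
    return out

calls = [
    ("text", (0, 0, 100, 20), "hello"),      # later buried under a dialog
    ("fill", (0, 0, 200, 200)),              # opaque dialog painted on top
    ("text", (10, 210, 110, 230), "world"),  # outside the dialog, stays visible
]
print(visible_text(calls))  # only "world" survives
```

Even this toy only handles full coverage by a single rectangle; a real implementation would have to deal with partial occlusion, transparency, and clipping regions, which is part of why screenshot-plus-OCR starts to look attractive.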
Text from OCR is only one kind of match. Recall also runs visual comparisons against the stored image tokens.
Hey, yeah… why aren’t they just tapping the font rendering DLL?
…are they tapping the font rendering DLL??
My guess is that they looked at their screen reader API, saw that it wasn't capturing 100% of the text on screen, and said fuck it! We're using OCR!
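A toy model of why an accessibility-tree walk comes up short (the node shapes here are invented for illustration, not the real UI Automation API): a custom-rendered surface, like a browser canvas, may expose no text at all even though text is plainly on screen.

```python
# Walk a toy accessibility tree, collecting whatever text the nodes
# expose. A custom-drawn element (e.g. a canvas) exposes no "text"
# property, so any text painted inside it is invisible to this walk.
# That is the coverage gap OCR papers over.
def collect_text(node):
    texts = []
    if node.get("text") is not None:
        texts.append(node["text"])
    for child in node.get("children", []):
        texts.extend(collect_text(child))
    return texts

window = {
    "role": "window",
    "children": [
        {"role": "button", "text": "OK"},
        # The browser paints "Welcome!" here with its own renderer,
        # so the accessibility node has nothing to report.
        {"role": "canvas", "text": None},
    ],
}
print(collect_text(window))  # the canvas text never shows up
```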
That's the thing: it doesn't really know what it's displaying. I can send a bunch of text boxes, but if they're hidden, drawn off-screen, or underneath another element, then they're not actually displayed.
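The filtering that point implies can be sketched with hypothetical widget records (the field names and screen bounds are made up for illustration, not any real toolkit's API): knowing the text of every element is not the same as knowing which text is actually shown.

```python
# Each widget record carries its text plus layout state. "Displayed"
# here means: not flagged hidden, and at least partially inside the
# screen bounds. (This still ignores z-order occlusion.)
SCREEN = (0, 0, 1920, 1080)  # x1, y1, x2, y2

def on_screen(bounds, screen=SCREEN):
    """True if the widget's bounds overlap the screen at all."""
    x1, y1, x2, y2 = bounds
    sx1, sy1, sx2, sy2 = screen
    return x1 < sx2 and x2 > sx1 and y1 < sy2 and y2 > sy1

def displayed_text(widgets):
    return [w["text"] for w in widgets
            if not w["hidden"] and on_screen(w["bounds"])]

widgets = [
    {"text": "visible label",    "hidden": False, "bounds": (10, 10, 200, 40)},
    {"text": "hidden tooltip",   "hidden": True,  "bounds": (10, 50, 200, 80)},
    {"text": "drawn off-screen", "hidden": False, "bounds": (-500, 0, -300, 30)},
]
print(displayed_text(widgets))  # only the visible label qualifies
```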
To be fair, Data was designed to be like a human and was made in the image of his creator. A number of his design decisions come down to his creator wanting to build something human-like, including the one you describe.
Data was never intended to work like a PC; it's entirely in character that he can't just wirelessly interface with stuff.