alt.hn

5/21/2025 at 8:05:13 AM

How we made our OCR code more accurate

https://pieces.app/blog/how-we-made-our-optical-character-recognition-ocr-code-more-accurate

by thunderbong

5/22/2025 at 12:33:20 PM

I can't say I've ever wanted to transcribe code from an image. That seems super niche.

Perhaps the specific idea is to harvest coding textbooks as training data for LLMs?

by bluelightning2k

5/22/2025 at 9:13:24 PM

Pieces is (correction: used to be, prior to the AI slopification) an app for storing code snippets, so I think you can imagine the general idea of, e.g., "cool API usage example from a YouTube video, let me screenshot it!"

by cAtte_

5/22/2025 at 1:45:33 PM

I'm guessing to automatically scrape videos for future training rounds.

by eurekin

5/22/2025 at 9:29:54 PM

> can't say I've ever wanted to transcribe code from an image. That seems super niche.

This is a nightmare for endpoint protection. Imagine rogue employees snapping pics of your proprietary codebase and then using this to reassemble it.

by potato-peeler

5/22/2025 at 3:39:24 PM

Eh, imagine poor documentation where people take screenshots of steps and don't write them out.

I can also imagine plenty of YouTube tutorials that type the code live... seems fairly useful

by blharr

5/22/2025 at 11:04:45 AM

Neat article, but I feel like I have no idea why they're doing this! Is transcribing code from images really such a big use case?

by camtarn

5/22/2025 at 12:39:52 PM

The product appears to be similar to Microsoft's embattled Recall feature. In order to remember your digital life it takes frequent screenshots.

by SloopJon

5/22/2025 at 12:06:22 PM

From an accessibility standpoint, yes. To be able to pattern match where you are in an IDE without using an accessibility API.

by FloatArtifact

5/22/2025 at 12:01:54 PM

> To best support software engineers when they want to transcribe code from images, we fine-tuned our pre-processing pipeline to screenshots of code in IDEs, terminals, and online resources like YouTube videos and blog posts.

Even with these examples that seems like a very narrow use case.

by dewey

5/22/2025 at 4:03:16 PM

It worries me that stuff like that becoming easier will lead to wacky data pipelines being normalized (pulling display output off systems and "scraping" it to get data, of dubious quality, versus just building a proper interface). The kind of crowd that likes "low code" tools like MSFT's "Power Automate" is going to love to make Rube Goldberg nightmares out of tools like this.

It fills me with a deep sadness that we created deterministic machines and then, through laziness, exploit every opportunity to "contaminate" them with sloppy practices that make them produce output with the same fuzzy inaccuracy as human brains.

Old-man-yells-at-neural-networks take: We're entering a "The Machine Stops" era where nobody is going to know how to formulate basic algorithms.

"We need to add some numbers. Let's point a camera at the input, OCR it, then feed it to an LLM that 'knows math'. Then we don't have to figure out an algorithm to add numbers."

I wish compute "cost" more so people would be forced to actually make efficient use of hardware. Sadly, I think it'll take mass societal and infrastructure collapse for that to happen. Until it does, though, let the excess compute flow freely!

by EvanAnderson

5/22/2025 at 4:18:57 PM

Asimov - "The Feeling of Power".

by jocoda

5/22/2025 at 2:07:07 PM

I guess it would be excellent for evading security monitors to take unauthorized copies of your employer's codebase.

by gosub100

5/22/2025 at 3:59:19 PM

Has anyone tried feeding the admittedly noisy OCR-ed text - at a document level - to an LLM to make sense of it? Presumably some of the less capable models should be quite affordable and accurate at scale as well.
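
A rough sketch of what that could look like with the OpenAI Python client (untested; the model name and prompt are just placeholders):

    # Sketch: ask a small LLM to clean up noisy OCR output at the document level.
    # Assumes the openai package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()

    def clean_ocr_text(noisy_text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any cheap model would do
            messages=[
                {"role": "system",
                 "content": "Correct OCR errors in the user's text. "
                            "Do not add or remove content; preserve the layout."},
                {"role": "user", "content": noisy_text},
            ],
            temperature=0,
        )
        return response.choices[0].message.content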

by bobosha

5/22/2025 at 4:34:01 PM

OCR is the biggest XY problem.

Stop accepting PDFs and force things to use APIs ...

by lesuorac

5/23/2025 at 2:48:03 AM

Even a small upscaling model trained on text should do better than a big generic one.

by MoonGhost

5/22/2025 at 12:04:28 PM

Anything that mentions tesseract is about 10 years out of date at this point.

by abc-1

5/22/2025 at 1:25:48 PM

Quite simply, you’re completely wrong. Modern tesseract versions include a modern LSTM AI. It can very affordably be deployed on CPU, yet its performance is competitive with much more expensive large GPU-based models. Especially if you handle a high volume of scans, chances are that tesseract will have the best bang per buck.
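
For reference, switching on the LSTM engine from Python is a one-liner with pytesseract (minimal sketch, assuming Tesseract 4+ and pytesseract are installed):

    # Sketch: run Tesseract's LSTM engine on a screenshot, CPU only.
    from PIL import Image
    import pytesseract

    image = Image.open("screenshot.png")
    # --oem 1 selects the LSTM engine; --psm 6 assumes a single uniform block of text
    text = pytesseract.image_to_string(image, config="--oem 1 --psm 6")
    print(text)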

by fxtentacle

5/22/2025 at 2:56:25 PM

My company probably spent close to 6 figures overall creating Tesseract 5 custom models for various languages. Surya beats them all and is open source (and quite a bit faster).

by ianhawes

5/22/2025 at 11:22:37 PM

The Surya model weights are licensed CC-BY-NC-SA-4.0. They have an exception for small companies. If your company is not small, you either need to pay them or use them illegally.

Their training code and data are closed source. They are barely open weight, and only the inference code is open source.

by booder1

5/22/2025 at 1:57:24 PM

I remember that you could not train it yourself on a font like you could in older versions. Is that still the case?

by nicman23

5/22/2025 at 12:11:59 PM

5.5.0 was released in November last year. Still a very active project as far as I can tell, and it runs on CPU. Even compared to the best open-source GPU option it is still pretty good. VLMs work very differently and don't work as well for everything. Why is it out of date?

by booder1

5/22/2025 at 4:33:37 PM

I don't know that that is true: https://researchify.io/blog/comparing-pytesseract-paddleocr-...

Using Surya gets you significantly better results and makes almost all the work detailed in the article largely unnecessary.

by cbsmith

5/22/2025 at 10:59:27 PM

The Surya model weights are licensed CC-BY-NC-SA-4.0, so they're not free for commercial usage. Also, as far as I know, the training data is 100% unavailable. Given that they use well-trained but standard models, it isn't really open source and is barely, maybe, open weight. I kinda hate how their repo says GPL, because that is only true for the inference code. The training code is closed source.

by booder1

5/23/2025 at 5:02:56 PM

I did not know that the training code is closed source. That is troubling.

by cbsmith

5/22/2025 at 12:07:54 PM

Well, at least I can apt-get install tesseract-ocr.

That doesn't hold for any of the GPU-based solutions, last time I checked.

by amelius

5/22/2025 at 12:37:01 PM

I just built a pipeline with tesseract last year. What's better that is open source and runnable locally?

VLM hallucination is a blocker for my use case.

by krapht

5/22/2025 at 1:56:02 PM

If you are stuck with open source, then your options are limited.

Otherwise I'd say just use your operating system's OCR API. Both Windows and macOS have excellent APIs for this.

by criddell

5/22/2025 at 1:07:58 PM

How is a hallucination worse than a Tesseract error?

by stavros

5/22/2025 at 1:28:21 PM

Because the VLM doesn't know it hallucinated. When you get a Tesseract error you can flag the OCR job for manual review.
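
Concretely, Tesseract reports per-word confidences, so flagging is cheap (sketch; the 60% threshold is arbitrary):

    # Sketch: flag low-confidence words from Tesseract for manual review.
    from PIL import Image
    import pytesseract

    data = pytesseract.image_to_data(Image.open("scan.png"),
                                     output_type=pytesseract.Output.DICT)
    needs_review = [
        word for word, conf in zip(data["text"], data["conf"])
        if word.strip() and float(conf) < 60
    ]
    if needs_review:
        print("Flag for manual review:", needs_review)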

by krapht

5/22/2025 at 1:58:33 PM

The latter is more likely to get debugged.

by gessha

5/22/2025 at 1:37:32 PM

It could hallucinate obscene language, something which is less likely with classic OCR.

by amelius

5/22/2025 at 1:18:24 PM

Hallucinations are hard to detect unless you are a subject-matter expert. I don't have direct experience with Tesseract error detection.

by jgalt212

5/22/2025 at 3:56:46 PM

Making OCR more accurate for regular text (e.g. data extraction from documents) would be useful; not sure how useful code transcription is

by sushid

5/22/2025 at 1:28:12 PM

Tesseract OCR was created at HP in _1985_ (yes, 40, not four, YEARS ago). Now go back and read the article and ROFL with me.

by vaxman

5/22/2025 at 3:19:20 PM

What is this argument? Much software we use today was created in the 80s.

by ivanjermakov

5/23/2025 at 7:01:22 AM

Not the actual implementations, heh... I heard even Linus has dropped support for the 486. Even the infra is finally giving way... did you see the NVLink Spine announcement a few days ago? It's going to be deployed in Stargate UAE, which was announced Thursday.

by vaxman

5/22/2025 at 4:06:08 PM

Unix was created in _1971_ and here we are still running processes and shells like it’s the 70s. Why not just have an LLM dream up the output?

by rafram

5/23/2025 at 6:36:24 AM

No son, Linux is not a version of Unix any more than MINIX is.

NeXTStep was real UNIX, but macOS is not.

BTW, I was taught to program in C by one of the original core Unix team members, and I worked for DEC long before I could have discussed Tesseract OCR with people who didn't. Keep those ignorant downvotes comin'.

by vaxman

5/22/2025 at 1:38:38 PM

The original Tesseract OCR has no neural nets. It bears little resemblance to the modern version.

by Onavo

5/22/2025 at 2:26:30 PM

It's still 40.

Why not use Ollama-OCR?
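
If that means pointing a local vision model at the screenshot via Ollama, something like this (sketch; assumes the ollama Python package and a pulled vision model, the model name is illustrative):

    # Sketch: OCR via a local vision model served by Ollama.
    import ollama

    response = ollama.chat(
        model="llama3.2-vision",
        messages=[{
            "role": "user",
            "content": "Transcribe all text in this image exactly, preserving line breaks.",
            "images": ["screenshot.png"],
        }],
    )
    print(response["message"]["content"])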

by vaxman

5/22/2025 at 4:08:12 PM

I’ve tested a bunch of vision models on particularly difficult documents (handwritten in a German script that’s no longer used), and I have yet to be impressed. They’re good at BSing to the point that you almost think they nailed it, until you realize that it’s mostly/all made-up text that doesn’t appear in the document.

by rafram

5/22/2025 at 4:52:49 PM

> It's still 40.

Is it, though? If the important parts of the code are new, does it matter that other parts are older or derived from older code? (Of course, I think this whole line of thought is pointless; what matters is not age, but how well it works, and tesseract generally does seem to work.)

by yjftsjthsd-h

5/23/2025 at 6:51:04 AM

Yeah it is, it does (especially with OOP), and ABBYY kicked Tesseract's arse a long time ago anyway.

Maybe try OpenAI GPT-4o or Google's Document AI https://cloud.google.com/document-ai

by vaxman

5/22/2025 at 3:53:10 PM

Because I benchmarked both on my dataset and found that Tesseract was better for my use-case?
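
The benchmark itself is not much code: character error rate against a ground-truth transcription covers most of it (sketch; assumes you have ground-truth text for each image):

    # Sketch: compare OCR engines by character error rate (CER) against ground truth.
    def levenshtein(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def cer(predicted: str, truth: str) -> float:
        return levenshtein(predicted, truth) / max(len(truth), 1)

    # e.g. compare cer(tesseract_output, ground_truth) with cer(vlm_output, ground_truth)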

by krapht