Surely the next step is to tweak the open source font with AI to slightly better target the desired font?
Sometimes you're trying to replicate a logo and they've hand tweaked a letter or two, or stretched it.
edit: (Google Fonts actually has fonts you can natively stretch, and some with replacement character variants.) I wonder if the tool uses that to better match?
Sometimes you just don't want to pay for a fancy font and I believe font shapes are not copyrightable?
I’m curious to know how this model tackles newly released fonts. How difficult would it be if this model needed to recognise fonts in a different language?
Graphic designer here. A font recognition AI is sorely needed. Gemini and its competitors flat out lie when asked, and Adobe Illustrator's Retype is laughably bad. The problem I face almost every day is not finding a close match but finding the actual font in use, commercial or not.
whatthefont and identifont used to work well, but they've been overwhelmed by new designs which are not used often enough to warrant inclusion.
I just use Rookledge's Type Finder and a battered copy of Precision Type 5.0
Curious why the model architecture wasn’t talked about at all? Did I miss that part?
pretty basic
```
import torch.nn as nn
from torchvision import models

# Avoid downloading pretrained weights; we load trained checkpoint weights.
model = models.resnet18(weights=None)
# Swap the ImageNet head for one sized to the number of font classes.
model.fc = nn.Linear(model.fc.in_features, num_classes)
```
https://github.com/mixfont/lens/blob/main/lens_inference.py#...
Instructive, with a rewarding repo for your time.